| author | Paul Buetow <paul@buetow.org> | 2025-04-04 23:22:07 +0300 |
|---|---|---|
| committer | Paul Buetow <paul@buetow.org> | 2025-04-04 23:22:07 +0300 |
| commit | d2369f081c4544834a1f114bf33fc815f14d683c (patch) | |
| tree | 8b90f329f0db0ecba84593f2c62eda861d84a25d /gemfeed | |
| parent | 8a893a87b0bbeebeb0890475d243fabe7d499aeb (diff) | |
Update content for html
Diffstat (limited to 'gemfeed')
17 files changed, 13585 insertions, 737 deletions
diff --git a/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html index e030a2be..d5f81d05 100644 --- a/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html +++ b/gemfeed/2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html @@ -413,6 +413,7 @@ Notice: Finished catalog run in 206.09 seconds <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> diff --git a/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html index 7ad59bf4..2e8782f8 100644 --- a/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html +++ b/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.html @@ -692,6 +692,7 @@ rex commons <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' 
href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> diff --git a/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html index f60200f2..78a1d5d5 100644 --- a/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html +++ b/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.html @@ -70,6 +70,7 @@ $ doas reboot <i><font color="silver"># Just in case, reboot one more time</font <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> diff --git a/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html b/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html index e2a81eeb..5840974c 100644 --- a/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html +++ b/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html @@ -331,6 +331,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <span>Other *BSD and KISS related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> 
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> diff --git a/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html b/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html index 5468cd1d..8382e37b 100644 --- a/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html +++ b/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html @@ -21,6 +21,7 @@ <br /> <span>These are all the posts so far:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)</a><br /> @@ -47,7 +48,7 @@ <li>⇢ <a href='#monitoring-keeping-an-eye-on-everything'>Monitoring: Keeping an eye on everything</a></li> <li>⇢ ⇢ <a href='#prometheus-and-grafana'>Prometheus and Grafana</a></li> <li>⇢ ⇢ <a href='#gogios-my-custom-alerting-system'>Gogios: My custom alerting system</a></li> -<li>⇢ <a href='#what-s-after-this-all'>What's after this all?</a></li> +<li>⇢ <a href='#conclusion'>Conclusion</a></li> </ul><br /> <h2 style='display: inline' id='why-this-setup'>Why this setup?</h2><br /> <br /> @@ -157,7 +158,7 @@ <br /> <span>Ironically, I implemented Gogios to avoid using more 
complex alerting systems like Prometheus, but here we go—it integrates well now.</span><br /> <br /> -<h2 style='display: inline' id='what-s-after-this-all'>What's after this all?</h2><br /> +<h2 style='display: inline' id='conclusion'>Conclusion</h2><br /> <br /> <span>This setup may be just the beginning. Some ideas I'm thinking about for the future:</span><br /> <br /> @@ -175,6 +176,7 @@ <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)</a><br /> diff --git a/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html b/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html index 1ecf2f2c..361ab8ec 100644 --- a/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html +++ b/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html @@ -21,6 +21,7 @@ <br /> <span>These are all the posts so far:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and 
base installation (You are currently reading this)</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> @@ -354,6 +355,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> diff --git a/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html b/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html index b9116a4d..45841713 100644 --- a/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html +++ b/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html @@ -20,6 +20,7 @@ <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)</a><br /> +<a class='textlink' 
href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -41,6 +42,7 @@ <li>⇢ <a href='#power-outage-simulation'>Power outage simulation</a></li> <li>⇢ ⇢ <a href='#pulling-the-plug'>Pulling the plug</a></li> <li>⇢ ⇢ <a href='#restoring-power'>Restoring power</a></li> +<li>⇢ <a href='#conclusion'>Conclusion</a></li> </ul><br /> <h2 style='display: inline' id='introduction'>Introduction</h2><br /> <br /> @@ -402,10 +404,17 @@ Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd exiting, signal 15 Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded </pre> <br /> -<span>All good :-) See you in the next post of this series!</span><br /> +<span>All good :-)</span><br /> +<br /> +<h2 style='display: inline' id='conclusion'>Conclusion</h2><br /> +<br /> +<span>I have the same UPS (but with a bit more capacity) for my main work setup, which powers my 28" screen, music equipment, etc. 
It has already been helpful a couple of times during power outages here, so I am sure that the smaller UPS for the F3s setup will be of great use.</span><br /> +<br /> +<span>See you in the next post of this series!</span><br /> <br /> <span>Other BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> diff --git a/gemfeed/2025-02-08-random-weird-things-ii.html b/gemfeed/2025-02-08-random-weird-things-ii.html index 770e287a..03f2d8de 100644 --- a/gemfeed/2025-02-08-random-weird-things-ii.html +++ b/gemfeed/2025-02-08-random-weird-things-ii.html @@ -73,7 +73,7 @@ <br /> <h3 style='display: inline' id='13-go-functions-can-have-methods'>13. Go functions can have methods</h3><br /> <br /> -<span>Functions on struct types? Well, know. Functions on types like <span class='inlinecode'>int</span> and <span class='inlinecode'>string</span>? It's also known of, but a bit lesser. Functions on function types? That sounds a bit funky, but it's possible, too! For demonstration, have a look at this snippet:</span><br /> +<span>Functions on struct types? Well known. Functions on types like <span class='inlinecode'>int</span> and <span class='inlinecode'>string</span>? It's also known of, but a bit lesser. Functions on function types? That sounds a bit funky, but it's possible, too! 
For demonstration, have a look at this snippet:</span><br /> <br /> <!-- Generator: GNU source-highlight 3.1.9 by Lorenzo Bettini @@ -119,7 +119,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <h3 style='display: inline' id='14--and-ss-are-treated-the-same'>14. ß and ss are treated the same</h3><br /> <br /> -<span>Know German? In German, the letter "sarp s" is written as ß. ß is treated the same as ss on macOS.</span><br /> +<span>Know German? In German, the letter "sharp s" is written as ß. ß is treated the same as ss on macOS.</span><br /> <br /> <span>On a case-insensitive file system like macOS, not only are uppercase and lowercase letters treated the same, but non-Latin characters like the German "ß" are also considered equivalent to their Latin counterparts (in this case, "ss").</span><br /> <br /> diff --git a/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html b/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html new file mode 100644 index 00000000..b7d665c9 --- /dev/null +++ b/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html @@ -0,0 +1,609 @@ +<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> +<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> +<head> +<meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> +<title>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</title> +<link rel="shortcut icon" type="image/gif" href="/favicon.ico" /> +<link rel="stylesheet" href="../style.css" /> +<link rel="stylesheet" href="style-override.css" /> +</head> +<body> +<p class="header"> +<a href="https://foo.zone">Home</a> | <a href="https://codeberg.org/snonux/foo.zone/src/branch/content-md/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.md">Markdown</a> | <a href="gemini://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi">Gemini</a> +</p> +<h1 style='display: inline' 
id='f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</h1><br /> +<br /> +<span class='quote'>Published at 2025-04-04T23:21:01+03:00</span><br /> +<br /> +<span>This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</span><br /> +<br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> +<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> +<br /> +<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> +<br /> +<ul> +<li><a href='#f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a></li> +<li>⇢ <a href='#introduction'>Introduction</a></li> +<li>⇢ <a href='#check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</a></li> +<li>⇢ <a href='#basic-bhyve-setup'>Basic Bhyve setup</a></li> +<li>⇢ <a href='#rocky-linux-vms'>Rocky Linux VMs</a></li> +<li>⇢ ⇢ <a href='#iso-download'>ISO download</a></li> +<li>⇢ ⇢ <a href='#vm-configuration'>VM configuration</a></li>
+<li>⇢ ⇢ <a href='#vm-installation'>VM installation</a></li> +<li>⇢ ⇢ <a href='#increase-of-the-disk-image'>Increase of the disk image</a></li> +<li>⇢ ⇢ <a href='#connect-to-vnc'>Connect to VNC</a></li> +<li>⇢ <a href='#after-install'>After install</a></li> +<li>⇢ ⇢ <a href='#vm-auto-start-after-host-reboot'>VM auto-start after host reboot</a></li> +<li>⇢ ⇢ <a href='#static-ip-configuration'>Static IP configuration</a></li> +<li>⇢ ⇢ <a href='#permitting-root-login'>Permitting root login</a></li> +<li>⇢ ⇢ <a href='#install-latest-updates'>Install latest updates</a></li> +<li>⇢ <a href='#stress-testing-cpu'>Stress testing CPU</a></li> +<li>⇢ ⇢ <a href='#silly-freebsd-host-benchmark'>Silly FreeBSD host benchmark</a></li> +<li>⇢ ⇢ <a href='#silly-rocky-linux-vm--bhyve-benchmark'>Silly Rocky Linux VM @ Bhyve benchmark</a></li> +<li>⇢ ⇢ <a href='#silly-freebsd-vm--bhyve-benchmark'>Silly FreeBSD VM @ Bhyve benchmark</a></li> +<li>⇢ <a href='#benchmarking-with-ubench'>Benchmarking with <span class='inlinecode'>ubench</span></a></li> +<li>⇢ ⇢ <a href='#freebsd-host-ubench-benchmark'>FreeBSD host <span class='inlinecode'>ubench</span> benchmark</a></li> +<li>⇢ ⇢ <a href='#freebsd-vm--bhyve-ubench-benchmark'>FreeBSD VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</a></li> +<li>⇢ ⇢ <a href='#rocky-linux-vm--bhyve-ubench-benchmark'>Rocky Linux VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</a></li> +<li>⇢ <a href='#conclusion'>Conclusion</a></li> +</ul><br /> +<h2 style='display: inline' id='introduction'>Introduction</h2><br /> +<br /> +<span>In this blog post, we are going to install the Bhyve hypervisor.</span><br /> +<br /> +<span>The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.</span><br /> +<br /> +<span>Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.</span><br /> +<br /> +<h2 style='display: inline' id='check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</h2><br /> +<br /> +<span>POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.</span><br /> +<br /> +<span>To check for <span class='inlinecode'>POPCNT</span> support, run:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % dmesg | grep <font color="#808080">'Features2=.*POPCNT'</font> + Features2=<font color="#000000">0x7ffafbbf</font><SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG, + FMA,CX16,xTPR,PDCM,PCID,SSE4.<font color="#000000">1</font>,SSE4.<font color="#000000">2</font>,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE, + OSXSAVE,AVX,F16C,RDRAND> +</pre> +<br /> +<span>So it's there! 
All good.</span><br /> +<br /> +<h2 style='display: inline' id='basic-bhyve-setup'>Basic Bhyve setup</h2><br /> +<br /> +<span>For managing the Bhyve VMs, we are using <span class='inlinecode'>vm-bhyve</span>, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.</span><br /> +<br /> +<a class='textlink' href='https://github.com/churchers/vm-bhyve'>https://github.com/churchers/vm-bhyve</a><br /> +<br /> +<span>The following commands are executed on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, and <span class='inlinecode'>f2</span>, where <span class='inlinecode'>re0</span> is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware +paul@f0:~ % doas sysrc vm_enable=YES +vm_enable: -> YES +paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve +vm_dir: -> zfs:zroot/bhyve +paul@f0:~ % doas zfs create zroot/bhyve +paul@f0:~ % doas vm init +paul@f0:~ % doas vm switch create public +paul@f0:~ % doas vm switch add public re0 +</pre> +<br /> +<span>Bhyve stores all its data in the <span class='inlinecode'>bhyve</span> dataset of the <span class='inlinecode'>zroot</span> ZFS pool:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % zfs list | grep bhyve +zroot/bhyve <font color="#000000">1</font>.74M 453G <font color="#000000">1</font>.74M /zroot/bhyve +</pre> +<br /> +<span>For convenience, we also create this symlink:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo
Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve +</pre> +<br /> +<span>Now, Bhyve is ready to rumble, but no VMs are there yet:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm list +NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE +</pre> +<br /> +<h2 style='display: inline' id='rocky-linux-vms'>Rocky Linux VMs</h2><br /> +<br /> +<span>As the guest OS for the VMs, I decided to use Rocky Linux.</span><br /> +<br /> +<span>Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.</span><br /> +<br /> +<a class='textlink' href='https://rockylinux.org/'>https://rockylinux.org/</a><br /> +<br /> +<h3 style='display: inline' id='iso-download'>ISO download</h3><br /> +<br /> +<span>We're going to install Rocky Linux from the latest minimal ISO:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm iso \ + https://download.rockylinux.org/pub/rocky/<font color="#000000">9</font>/isos/x86_64/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso +/zroot/bhyve/.iso/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso <font color="#000000">1808</font> MB <font color="#000000">4780</font> kBps 06m28s +paul@f0:/bhyve % doas vm create rocky +</pre> +<br /> +<h3 style='display: inline' id='vm-configuration'>VM configuration</h3><br /> +<br /> +<span>The default
Bhyve VM configuration looks like this now:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % cat rocky.conf +loader=<font color="#808080">"bhyveload"</font> +cpu=<font color="#000000">1</font> +memory=256M +network0_type=<font color="#808080">"virtio-net"</font> +network0_switch=<font color="#808080">"public"</font> +disk0_type=<font color="#808080">"virtio-blk"</font> +disk0_name=<font color="#808080">"disk0.img"</font> +uuid=<font color="#808080">"1c4655ac-c828-11ef-a920-e8ff1ed71ca0"</font> +network0_mac=<font color="#808080">"58:9c:fc:0d:13:3f"</font> +</pre> +<br /> +<span>The <span class='inlinecode'>uuid</span> and the <span class='inlinecode'>network0_mac</span> differ for each of the three VMs.</span><br /> +<br /> +<span>But the configuration needs a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we ran <span class='inlinecode'>doas vm configure rocky</span> and modified the configuration to:</span><br /> +<br /> +<pre> +guest="linux" +loader="uefi" +uefi_vars="yes" +cpu=4 +memory=14G +network0_type="virtio-net" +network0_switch="public" +disk0_type="virtio-blk" +disk0_name="disk0.img" +graphics="yes" +graphics_vga=io +uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac" +network0_mac="58:9c:fc:0d:13:3f" +</pre> +<br /> +<h3 style='display: inline' id='vm-installation'>VM installation</h3><br /> +<br /> +<span>To start the installer from the downloaded ISO, we run:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso +Starting rocky + * found guest <b><u><font color="#000000">in</font></u></b> /zroot/bhyve/rocky + * booting... + +paul@f0:/bhyve/rocky % doas vm list +NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE +rocky default uefi <font color="#000000">4</font> 14G <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font> No Locked (f0.lan.buetow.org) + +paul@f0:/bhyve/rocky % doas sockstat -<font color="#000000">4</font> | grep <font color="#000000">5900</font> +root bhyve <font color="#000000">6079</font> <font color="#000000">8</font> tcp4 *:<font color="#000000">5900</font> *:* +</pre> +<br /> +<span>Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues.
This could be done unattended or automated further, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only about once a year.</span><br /> +<br /> +<h3 style='display: inline' id='increase-of-the-disk-image'>Increase of the disk image</h3><br /> +<br /> +<span>By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran <span class='inlinecode'>truncate</span> on the image files to enlarge them to 100G, and re-started the installation:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % doas vm stop rocky +paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img +paul@f0:/bhyve/rocky % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso +</pre> +<br /> +<h3 style='display: inline' id='connect-to-vnc'>Connect to VNC</h3><br /> +<br /> +<span>For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were <span class='inlinecode'>vnc://f0:5900</span>, <span class='inlinecode'>vnc://f1:5900</span>, and <span class='inlinecode'>vnc://f2:5900</span>.</span><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/1.png'><img src='./f3s-kubernetes-with-freebsd-part-4/1.png' /></a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/2.png'><img src='./f3s-kubernetes-with-freebsd-part-4/2.png' /></a><br /> +<br /> +<span>I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password).
After the installation, the VMs were rebooted.</span><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/3.png'><img src='./f3s-kubernetes-with-freebsd-part-4/3.png' /></a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/4.png'><img src='./f3s-kubernetes-with-freebsd-part-4/4.png' /></a><br /> +<br /> +<h2 style='display: inline' id='after-install'>After install</h2><br /> +<br /> +<span>We perform the following steps for all 3 VMs. The examples below are all executed on <span class='inlinecode'>f0</span> (and its VM <span class='inlinecode'>r0</span>):</span><br /> +<br /> +<h3 style='display: inline' id='vm-auto-start-after-host-reboot'>VM auto-start after host reboot</h3><br /> +<br /> +<span>To automatically start the VM on the servers, we add the following to the <span class='inlinecode'>rc.conf</span> on the FreeBSD hosts:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf +vm_list=<font color="#808080">"rocky"</font> +vm_delay=<font color="#808080">"5"</font> +END +</pre> +<br /> +<span>The <span class='inlinecode'>vm_delay</span> isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful.
After adding, there's now a <span class='inlinecode'>Yes</span> indicator in the <span class='inlinecode'>AUTO</span> column.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm list +NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE +rocky default uefi <font color="#000000">4</font> 14G <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font> Yes [<font color="#000000">1</font>] Running (<font color="#000000">2063</font>) +</pre> +<br /> +<h3 style='display: inline' id='static-ip-configuration'>Static IP configuration</h3><br /> +<br /> +<span>After that, we change the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my <span class='inlinecode'>/etc/hosts</span> file:</span><br /> +<br /> +<pre> +192.168.1.130 f0 f0.lan f0.lan.buetow.org +192.168.1.131 f1 f1.lan f1.lan.buetow.org +192.168.1.132 f2 f2.lan f2.lan.buetow.org +</pre> +<br /> +<span>For the Rocky VMs, we add those entries to the FreeBSD host systems as well:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts +<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org +<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org +<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org +END +</pre> +<br /> +<span>And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each of the VMs and entering the following commands:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address <font color="#000000">192.168</font>.<font color="#000000">1.120</font>/<font color="#000000">24</font>
+[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway <font color="#000000">192.168</font>.<font color="#000000">1.1</font>
+[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns <font color="#000000">192.168</font>.<font color="#000000">1.1</font>
+[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
+[root@r0 ~] % nmcli connection down enp0s5
+[root@r0 ~] % nmcli connection up enp0s5
+[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
+[root@r0 ~] % cat <<END >>/etc/hosts
+<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org
+<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org
+<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org
+END
+</pre>
+<br />
+<span>Where:</span><br />
+<br />
+<ul>
+<li><span class='inlinecode'>192.168.1.120</span> is the IP of the VM itself (here: <span class='inlinecode'>r0.lan.buetow.org</span>)</li>
+<li><span class='inlinecode'>192.168.1.1</span> is the address of my home router, which also does DNS.</li>
+</ul><br />
+<h3 style='display: inline' id='permitting-root-login'>Permitting root login</h3><br />
+<br />
+<span>As these VMs aren't directly reachable via SSH from the internet, we enable <span class='inlinecode'>root</span> login by adding the line <span class='inlinecode'>PermitRootLogin yes</span> to <span class='inlinecode'>/etc/ssh/sshd_config</span>.</span><br />
+<br />
+<span>Once done, we reboot the VM by running <span class='inlinecode'>reboot</span> inside the VM to test whether everything was configured and persisted correctly.</span><br />
+<br />
+<span>After reboot, I copied my public key from my 
laptop to the 3 VMs:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>% <b><u><font color="#000000">for</font></u></b> i <b><u><font color="#000000">in</font></u></b> <font color="#000000">0</font> <font color="#000000">1</font> <font color="#000000">2</font>; <b><u><font color="#000000">do</font></u></b> ssh-copy-id root@r$i.lan.buetow.org; <b><u><font color="#000000">done</font></u></b>
+</pre>
+<br />
+<span>Then, I edited the <span class='inlinecode'>/etc/ssh/sshd_config</span> file again on all 3 VMs, configured <span class='inlinecode'>PasswordAuthentication no</span> to only allow SSH key authentication from now on, and restarted the daemon with <span class='inlinecode'>systemctl restart sshd</span> so that the change takes effect.</span><br />
+<br />
+<h3 style='display: inline' id='install-latest-updates'>Install latest updates</h3><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>[root@r0 ~] % dnf update
+[root@r0 ~] % reboot
+</pre>
+<br />
+<h2 style='display: inline' id='stress-testing-cpu'>Stress testing CPU</h2><br />
+<br />
+<span>The aim is to prove that bhyve VMs are CPU efficient. 
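</span><br />
+<br />
+<span>Before comparing numbers, a quick sanity check (a hypothetical transcript) that the host and the VM see the same number of CPUs, which should be 4 on both the Intel N100 host and the 4-vCPU VM:</span><br />
+<br />
+<pre>paul@f0:~ % sysctl -n hw.ncpu
+4
+[root@r0 ~] % nproc
+4
+</pre>
+<br />
+<span>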
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><b><u><font color="#000000">package</font></u></b> main + +<b><u><font color="#000000">import</font></u></b> <font color="#808080">"testing"</font> + +<b><u><font color="#000000">func</font></u></b> BenchmarkCPUSilly1(b *testing.B) { + <b><u><font color="#000000">for</font></u></b> i := <font color="#000000">0</font>; i < b.N; i++ { + _ = i * i + } +} + +<b><u><font color="#000000">func</font></u></b> BenchmarkCPUSilly2(b *testing.B) { + <b><u><font color="#000000">var</font></u></b> sillyResult <b><font color="#000000">float64</font></b> + <b><u><font color="#000000">for</font></u></b> i := <font color="#000000">0</font>; i < b.N; i++ { + sillyResult += <b><font color="#000000">float64</font></b>(i) + sillyResult *= <b><font color="#000000">float64</font></b>(i) + divisor := <b><font color="#000000">float64</font></b>(i) + <font color="#000000">1</font> + <b><u><font color="#000000">if</font></u></b> divisor > <font color="#000000">0</font> { + sillyResult /= divisor + } + } + _ = sillyResult <i><font color="silver">// to avoid compiler optimization</font></i> +} +</pre> +<br /> +<span>You can find the repository here:</span><br /> +<br /> +<a class='textlink' href='https://codeberg.org/snonux/sillybench'>https://codeberg.org/snonux/sillybench</a><br /> +<br /> +<h3 style='display: inline' id='silly-freebsd-host-benchmark'>Silly FreeBSD host benchmark</h3><br /> +<br /> +<span>To install it on FreeBSD, we run:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas pkg install git go +paul@f0:~ % 
mkdir ~/git && cd ~/git && \ + git clone https://codeberg.org/snonux/sillybench && \ + cd sillybench +</pre> +<br /> +<span>And to run it:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~/git/sillybench % go version +go version go1.<font color="#000000">24.1</font> freebsd/amd<font color="#000000">64</font> + +paul@f0:~/git/sillybench % go <b><u><font color="#000000">test</font></u></b> -bench=. +goos: freebsd +goarch: amd64 +pkg: codeberg.org/snonux/sillybench +cpu: Intel(R) N100 +BenchmarkCPUSilly1-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4022</font> ns/op +BenchmarkCPUSilly2-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4027</font> ns/op +PASS +ok codeberg.org/snonux/sillybench <font color="#000000">0</font>.891s +</pre> +<br /> +<h3 style='display: inline' id='silly-rocky-linux-vm--bhyve-benchmark'>Silly Rocky Linux VM @ Bhyve benchmark</h3><br /> +<br /> +<span>OK, let's compare this with the Rocky Linux VM running on Bhyve:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># dnf install golang git</font></i> +[root@r0 ~]<i><font color="silver"># mkdir ~/git && cd ~/git && \</font></i> + git clone https://codeberg.org/snonux/sillybench && \ + cd sillybench +</pre> +<br /> +<span>And to run it:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 sillybench]<i><font color="silver"># go version</font></i> +go version go1.<font color="#000000">22.9</font> (Red Hat <font color="#000000">1.22</font>.<font color="#000000">9</font>-<font color="#000000">2</font>.el9_5) 
linux/amd<font color="#000000">64</font>
+[root@r0 sillybench]<i><font color="silver"># go test -bench=.</font></i>
+goos: linux
+goarch: amd64
+pkg: codeberg.org/snonux/sillybench
+cpu: Intel(R) N100
+BenchmarkCPUSilly1-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4347</font> ns/op
+BenchmarkCPUSilly2-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4345</font> ns/op
+</pre>
+<br />
+<span>The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.</span><br />
+<br />
+<h3 style='display: inline' id='silly-freebsd-vm--bhyve-benchmark'>Silly FreeBSD VM @ Bhyve benchmark</h3><br />
+<br />
+<span>But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. 
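</span><br />
+<br />
+<span>As a back-of-the-envelope calculation (not part of the original runs), the relative slowdown of the Linux VM versus the FreeBSD host can be derived from the <span class='inlinecode'>ns/op</span> figures above:</span><br />
+<br />
+<pre>% printf '%s\n' 0.4022 0.4347 | \
+    awk 'NR==1 { host = $1 } NR==2 { printf "%.1f%% slower\n", ($1 - host) / host * 100 }'
+8.1% slower
+</pre>
+<br />
+<span>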
I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.</span><br />
+<br />
+<span>But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14 GB of RAM; the benchmark won't use more CPUs anyway):</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>root@freebsd:~/git/sillybench <i><font color="silver"># go test -bench=.</font></i>
+goos: freebsd
+goarch: amd64
+pkg: codeberg.org/snonux/sillybench
+cpu: Intel(R) N100
+BenchmarkCPUSilly1 <font color="#000000">1000000000</font> <font color="#000000">0.4273</font> ns/op
+BenchmarkCPUSilly2 <font color="#000000">1000000000</font> <font color="#000000">0.4286</font> ns/op
+PASS
+ok codeberg.org/snonux/sillybench <font color="#000000">0</font>.949s
+</pre>
+<br />
+<span>It's a bit better than Linux! Of course, this is not a scientific benchmark, so take the results with a grain of salt!</span><br />
+<br />
+<h2 style='display: inline' id='benchmarking-with-ubench'>Benchmarking with <span class='inlinecode'>ubench</span></h2><br />
+<br />
+<span>Let's run another, more sophisticated benchmark using <span class='inlinecode'>ubench</span>, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running <span class='inlinecode'>doas pkg install ubench</span>. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with <span class='inlinecode'>-s</span>, and then let it run at full speed in the second run.</span><br /> +<br /> +<h3 style='display: inline' id='freebsd-host-ubench-benchmark'>FreeBSD host <span class='inlinecode'>ubench</span> benchmark</h3><br /> +<br /> +<span>Single CPU:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas ubench -s <font color="#000000">1</font> +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. +Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench Single CPU: <font color="#000000">671010</font> (<font color="#000000">0</font>.40s) +Ubench Single MEM: <font color="#000000">1705237</font> (<font color="#000000">0</font>.48s) +----------------------------------- +Ubench Single AVG: <font color="#000000">1188123</font> + +</pre> +<br /> +<span>All CPUs (with all Bhyve VMs stopped):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas ubench +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. 
+Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench CPU: <font color="#000000">2660220</font> +Ubench MEM: <font color="#000000">3095182</font> +-------------------- +Ubench AVG: <font color="#000000">2877701</font> +</pre> +<br /> +<h3 style='display: inline' id='freebsd-vm--bhyve-ubench-benchmark'>FreeBSD VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</h3><br /> +<br /> +<span>Single CPU:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>root@freebsd:~ <i><font color="silver"># ubench -s 1</font></i> +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. +Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench Single CPU: <font color="#000000">672792</font> (<font color="#000000">0</font>.40s) +Ubench Single MEM: <font color="#000000">852757</font> (<font color="#000000">0</font>.48s) +----------------------------------- +Ubench Single AVG: <font color="#000000">762774</font> +</pre> +<br /> +<span>All CPUs:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>root@freebsd:~ <i><font color="silver"># ubench</font></i> +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. 
+Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench CPU: <font color="#000000">2652857</font> +swap_pager: out of swap space +swp_pager_getswapspace(<font color="#000000">27</font>): failed +swap_pager: out of swap space +swp_pager_getswapspace(<font color="#000000">18</font>): failed +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">43</font> freebsd kernel: pid <font color="#000000">862</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +swp_pager_getswapspace(<font color="#000000">6</font>): failed +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">46</font> freebsd kernel: pid <font color="#000000">863</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">47</font> freebsd kernel: pid <font color="#000000">864</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">48</font> freebsd kernel: pid <font color="#000000">865</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">49</font> freebsd kernel: pid <font color="#000000">861</font> (ubench), jid <font 
color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
+Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">51</font> freebsd kernel: pid <font color="#000000">839</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
+</pre>
+<br />
+<span>The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14 GB of RAM, but I did not investigate further.</span><br />
+<br />
+<span>Also, during the benchmark, I noticed the <span class='inlinecode'>bhyve</span> process on the host was constantly using 399% of the CPU (all 4 CPUs):</span><br />
+<br />
+<pre>
+ PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
+ 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
+</pre>
+<br />
+<span>Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?</span><br />
+<br />
+<h3 style='display: inline' id='rocky-linux-vm--bhyve-ubench-benchmark'>Rocky Linux VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</h3><br />
+<br />
+<span>Unfortunately, I wasn't able to find <span class='inlinecode'>ubench</span> in any of the Rocky Linux repositories. So, I skipped this test.</span><br />
+<br />
+<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
+<br />
+<span>Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
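</span><br />
+<br />
+<span>As a quick, non-exhaustive reference (<span class='inlinecode'>vm list</span> and <span class='inlinecode'>vm stop</span> already appeared earlier in this post, and <span class='inlinecode'>vm start</span> is their counterpart), day-to-day VM management boils down to a few commands:</span><br />
+<br />
+<pre>paul@f0:~ % doas vm list
+paul@f0:~ % doas vm stop rocky
+paul@f0:~ % doas vm start rocky
+</pre>
+<br />
+<span>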
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.</span><br />
+<br />
+<span>Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows or NetBSD VM to tinker with?</span><br />
+<br />
+<span>This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.</span><br />
+<br />
+<span>See you in the next blog post of this series. Maybe we will install highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.</span><br />
+<br />
+<span>Other *BSD-related posts:</span><br />
+<br />
+<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br />
+<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
+<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
+<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
+<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
+<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
+<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
+<a class='textlink' 
href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br /> +<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br /> +<br /> +<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br /> +<br /> +<a class='textlink' href='../'>Back to the main site</a><br /> +<p class="footer"> +Generated with <a href="https://codeberg.org/snonux/gemtexter">Gemtexter 3.0.1-develop</a> | +served by <a href="https://www.OpenBSD.org">OpenBSD</a>/<a href="https://man.openbsd.org/relayd.8">relayd(8)</a>+<a href="https://man.openbsd.org/httpd.8">httpd(8)</a> | +<a href="https://foo.zone/site-mirrors.html">Site Mirrors</a> +</p> +</body> +</html> diff --git a/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.html b/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.html deleted file mode 100644 index 133ba1d6..00000000 --- a/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.html +++ /dev/null @@ -1,345 +0,0 @@ -<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> -<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> -<head> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> -<title>f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4</title> -<link rel="shortcut icon" type="image/gif" href="/favicon.ico" /> -<link rel="stylesheet" href="../style.css" /> -<link rel="stylesheet" href="style-override.css" /> -</head> -<body> -<p class="header"> -<a href="https://foo.zone">Home</a> | <a href="https://codeberg.org/snonux/foo.zone/src/branch/content-md/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.md">Markdown</a> | <a href="gemini://foo.zone/gemfeed/DRAFT-f3s-kubernetes-with-freebsd-part-4.gmi">Gemini</a> -</p> -<h1 style='display: inline' id='f3s-kubernetes-with-freebsd---rocky-linux-bhyve-vms---part-4'>f3s: 
Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4</h1><br /> -<br /> -<span>This is the thourth blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution we will use on FreeBSD-based physical machines.</span><br /> -<br /> -<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> -<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> -<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> -<br /> -<a href='./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-frhyveeebsd-part-1/f3slogo.png' /></a><br /> -<br /> -<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> -<br /> -<ul> -<li><a href='#f3s-kubernetes-with-freebsd---rocky-linux-bhyve-vms---part-4'>f3s: Kubernetes with FreeBSD - Rocky Linux Bhyve VMs - Part 4</a></li> -<li>⇢ <a href='#introduction'>Introduction</a></li> -<li>⇢ <a href='#check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</a></li> -<li>⇢ <a href='#basic-bhyve-setup'>Basic Bhyve setup</a></li> -<li>⇢ <a href='#rocky-linux-vms'>Rocky Linux VMs</a></li> -<li>⇢ ⇢ <a href='#iso-download'>ISO download</a></li> -<li>⇢ ⇢ <a href='#vm-configuration'>VM configuration</a></li> -<li>⇢ ⇢ <a href='#vm-installation'>VM installation</a></li> -<li>⇢ ⇢ <a href='#increase-of-the-disk-image'>Increase of the disk image</a></li> -<li>⇢ ⇢ <a href='#connect-to-vpn'>Connect to VPN</a></li> -<li>⇢ <a href='#after-install'>After install</a></li> -<li>⇢ ⇢ <a href='#vm-auto-start-after-host-reboot'>VM 
auto-start after host reboot</a></li> -<li>⇢ ⇢ <a href='#static-ip-configuration'>Static IP configuration</a></li> -<li>⇢ ⇢ <a href='#permitting-root-login'>Permitting root login</a></li> -<li>⇢ ⇢ <a href='#install-latest-updates'>Install latest updates</a></li> -</ul><br /> -<h2 style='display: inline' id='introduction'>Introduction</h2><br /> -<br /> -<span>In this blog post, we are going to install the Bhyve hypervisor.</span><br /> -<br /> -<span>The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It is designed to be efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.</span><br /> -<br /> -<span>Bhyve supports running a variety of guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which later on in this series will be used to run k3s.</span><br /> -<br /> -<h2 style='display: inline' id='check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</h2><br /> -<br /> -<span>POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. In terms of CPU virtualization and Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines to for better performance. 
Without POPCNT support, some applications might not run, or they might perform suboptimally in virtualized environments.</span><br /> -<br /> -<span>To check for <span class='inlinecode'>POPCNT</span> support, I run:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % dmesg | grep <font color="#808080">'Features2=.*POPCNT'</font> - Features2=<font color="#000000">0x7ffafbbf</font><SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG, - FMA,CX16,xTPR,PDCM,PCID,SSE4.<font color="#000000">1</font>,SSE4.<font color="#000000">2</font>,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE, - OSXSAVE,AVX,F16C,RDRAND> -</pre> -<br /> -<span>So it's there! All good.</span><br /> -<br /> -<h2 style='display: inline' id='basic-bhyve-setup'>Basic Bhyve setup</h2><br /> -<br /> -<span>For the management of the Bhyve VMs, we are using <span class='inlinecode'>vm-bhyve</span>, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of the overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.</span><br /> -<br /> -<a class='textlink' href='https://github.com/churchers/vm-bhyve'>https://github.com/churchers/vm-bhyve</a><br /> -<br /> -<span>The following commands are executed on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, and <span class='inlinecode'>f2</span>, where <span class='inlinecode'>re0</span> is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware -paul@f0:~ % doas sysrc vm_enable=YES -vm_enable: -> YES -paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve -vm_dir: -> zfs:zroot/bhyve -paul@f0:~ % doas zfs create zroot/bhyve -paul@f0:~ % doas vm init -paul@f0:~ % doas vm switch create public -paul@f0:~ % doas vm switch add public re0 -</pre> -<br /> -<span>Bhyve stores all it's data in the <span class='inlinecode'>/bhyve</span> of the <span class='inlinecode'>zroot</span> ZFS pool:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % zfs list | grep bhyve -zroot/bhyve <font color="#000000">1</font>.74M 453G <font color="#000000">1</font>.74M /zroot/bhyve -</pre> -<br /> -<span>For convenience, we also create this symlink:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve - -</pre> -<br /> -<span>Now, Bhyve is ready to rumble, but no VMs are there yet:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it 
-http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % doas vm list -NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE -</pre> -<br /> -<h2 style='display: inline' id='rocky-linux-vms'>Rocky Linux VMs</h2><br /> -<br /> -<h3 style='display: inline' id='iso-download'>ISO download</h3><br /> -<br /> -<span>We're going to install the Rocky Linux from the latest minimal iso:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % doas vm iso \ - https://download.rockylinux.org/pub/rocky/<font color="#000000">9</font>/isos/x86_64/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso -/zroot/bhyve/.iso/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso <font color="#000000">1808</font> MB <font color="#000000">4780</font> kBps 06m28s -paul@f0:/bhyve % doas vm create rocky -</pre> -<h3 style='display: inline' id='vm-configuration'>VM configuration</h3><br /> -<br /> -<span>The default configuration looks like this now:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:/bhyve/rocky % cat rocky.conf -loader=<font color="#808080">"bhyveload"</font> -cpu=<font color="#000000">1</font> -memory=256M -network0_type=<font color="#808080">"virtio-net"</font> -network0_switch=<font color="#808080">"public"</font> -disk0_type=<font color="#808080">"virtio-blk"</font> -disk0_name=<font color="#808080">"disk0.img"</font> -uuid=<font color="#808080">"1c4655ac-c828-11ef-a920-e8ff1ed71ca0"</font> -network0_mac=<font color="#808080">"58:9c:fc:0d:13:3f"</font> -</pre> -<br /> -<span>Whereas the <span class='inlinecode'>uuid</span> and the <span class='inlinecode'>network0_mac</span> differ on each of the 3 hosts.</span><br /> -<br /> -<span>but in order to make Rocky Linux boot it (plus some other adjustments, e.g. 
as I am intending to run the majority of the workload in the k3s cluster running on those linux VMs, I give them beefy specs like 4 CPU cores and 14GB RAM), I run <span class='inlinecode'>doas vm configure rocky</span> and modified it to:</span><br /> -<br /> -<pre> -guest="linux" -loader="uefi" -uefi_vars="yes" -cpu=4 -memory=14G -network0_type="virtio-net" -network0_switch="public" -disk0_type="virtio-blk" -disk0_name="disk0.img" -graphics="yes" -graphics_vga=io -uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac" -network0_mac="58:9c:fc:0d:13:3f" -</pre> -<br /> -<h3 style='display: inline' id='vm-installation'>VM installation</h3><br /> -<br /> -<span>To start the installer from the downloaded ISO, I run:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso -Starting rocky - * found guest <b><u><font color="#000000">in</font></u></b> /zroot/bhyve/rocky - * booting... - -paul@f0:/bhyve/rocky % doas vm list -NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE -rocky default uefi <font color="#000000">4</font> 14G <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font> No Locked (f0.lan.buetow.org) - -paul@f0:/bhyve/rocky % doas sockstat -<font color="#000000">4</font> | grep <font color="#000000">5900</font> -root bhyve <font color="#000000">6079</font> <font color="#000000">8</font> tcp4 *:<font color="#000000">5900</font> *:* -</pre> -<br /> -<span>Port 5900 now also opened for VNC connections, so I connected to it with a VNC client and run through the installation dialogs. 
I'm sure this could be done unattended or more automated, there are only 3 VMs to install, and the automation doesn't seem worth it as we are doing it only once in a year or less often.</span><br /> -<br /> -<h3 style='display: inline' id='increase-of-the-disk-image'>Increase of the disk image</h3><br /> -<br /> -<span>By default the VMs disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again and run <span class='inlinecode'>truncate</span> on the image file to enlarge them to 100G, and re-started the installation:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:/bhyve/rocky % doas vm stop rocky -paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img -paul@f0:/bhyve/rocky % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso -</pre> -<br /> -<h3 style='display: inline' id='connect-to-vpn'>Connect to VPN</h3><br /> -<br /> -<span>For the installation, I opened the VPN client on my Fedora laptop (GNOME comes with a simple VPN client) and ran through the base installation for each of the VMs manually. Again, I am sure this could have been automated a bit more, but there were just 3 VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were: <span class='inlinecode'>vnc://f0:5900</span>, <span class='inlinecode'>vnc://f1:5900</span>, and <span class='inlinecode'>vnc://f0:5900</span>.</span><br /> -<br /> -<span>I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.</span><br /> -<br /> -<h2 style='display: inline' id='after-install'>After install</h2><br /> -<br /> -<span>I performed the following steps for all 3 VMs. 
In the following, the examples are all executed on <span class='inlinecode'>f0</span> (i.e., the VM <span class='inlinecode'>r0</span> running on <span class='inlinecode'>f0</span>):</span><br /> -<br /> -<h3 style='display: inline' id='vm-auto-start-after-host-reboot'>VM auto-start after host reboot</h3><br /> -<br /> -<span>To automatically start the VM on the servers, I added the following to the <span class='inlinecode'>rc.conf</span> on the FreeBSD hosts:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf -vm_list=<font color="#808080">"rocky"</font> -vm_delay=<font color="#808080">"5"</font> -END -</pre> -<br /> -<span>The <span class='inlinecode'>vm_delay</span> isn't really required. It is used to wait 5 seconds before starting each VM, but as of now, there is only one VM per host. Maybe later, when there are more, this will be useful to have. After adding, there's now a <span class='inlinecode'>Yes</span> indicator in the <span class='inlinecode'>AUTO</span> column.</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:~ % doas vm list -NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE -rocky default uefi <font color="#000000">4</font> 14G <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font> Yes [<font color="#000000">1</font>] Running (<font color="#000000">2063</font>) -</pre> -<br /> -<h3 style='display: inline' id='static-ip-configuration'>Static IP configuration</h3><br /> -<br /> -<span>After that, I changed the network configuration of the VMs to be static (previously DHCP). 
As per the previous post of this series, the 3 FreeBSD hosts were already in my <span class='inlinecode'>/etc/hosts</span> file:</span><br /> -<br /> -<pre> -192.168.1.130 f0 f0.lan f0.lan.buetow.org -192.168.1.131 f1 f1.lan f1.lan.buetow.org -192.168.1.132 f2 f2.lan f2.lan.buetow.org -</pre> -<br /> -<span>For the Rocky VMs, I added those to the FreeBSD host systems as well:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts -<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org -<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org -<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org -END -</pre> -<br /> -<span>and configured the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address <font color="#000000">192.168</font>.<font color="#000000">1.120</font>/<font color="#000000">24</font> -[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway <font color="#000000">192.168</font>.<font color="#000000">1.1</font> -[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns <font color="#000000">192.168</font>.<font color="#000000">1.1</font> -[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual -[root@r0 ~] % nmcli connection down enp0s5 -[root@r0 ~] % nmcli connection up enp0s5 -[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org -[root@r0 ~] % cat <<END >>/etc/hosts -<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 
r0.lan r0.lan.buetow.org -<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org -<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org -END -</pre> -<br /> -<span>Where:</span><br /> -<br /> -<ul> -<li><span class='inlinecode'>192.168.1.120</span> is the IP of the VM itself (here: <span class='inlinecode'>r0.lan.buetow.org</span>)</li> -<li><span class='inlinecode'>192.168.1.1</span> is the address of my home router, which also does DNS.</li> -</ul><br /> -<h3 style='display: inline' id='permitting-root-login'>Permitting root login</h3><br /> -<br /> -<span>As these VMs aren't directly reachable via SSH from the internet, I enabled <span class='inlinecode'>root</span> login by adding a line with <span class='inlinecode'>PermitRootLogin yes</span> to <span class='inlinecode'>/etc/ssh/sshd_config</span>.</span><br /> -<br /> -<span>Once done, I rebooted the VM by running <span class='inlinecode'>reboot</span> inside of the VM to test whether everything was configured and persisted correctly.</span><br /> -<br /> -<span>After reboot, I copied my public key from my laptop to the 3 VMs:</span><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>% <b><u><font color="#000000">for</font></u></b> i <b><u><font color="#000000">in</font></u></b> <font color="#000000">0</font> <font color="#000000">1</font> <font color="#000000">2</font>; <b><u><font color="#000000">do</font></u></b> ssh-copy-id root@r$i.lan.buetow.org; <b><u><font color="#000000">done</font></u></b> -</pre> -<br /> -<span>And then I edited the <span class='inlinecode'>/etc/ssh/sshd_config</span> file again on all 3 VMs and configured <span class='inlinecode'>PasswordAuthentication no</span> to only allow SSH key authentication from now on.</span><br /> -<br /> -<h3 style='display: inline' 
id='install-latest-updates'>Install latest updates</h3><br /> -<br /> -<!-- Generator: GNU source-highlight 3.1.9 -by Lorenzo Bettini -http://www.lorenzobettini.it -http://www.gnu.org/software/src-highlite --> -<pre>[root@r0 ~] % dnf update -[root@r0 ~] % reboot -</pre> -<br /> -<span>Other *BSD-related posts:</span><br /> -<br /> -<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> -<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> -<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> -<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br /> -<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br /> -<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br /> -<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br /> -<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br /> -<br /> -<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br /> -<br /> -<a class='textlink' href='../'>Back to the main site</a><br /> -<p class="footer"> -Generated with <a href="https://codeberg.org/snonux/gemtexter">Gemtexter 3.0.1-develop</a> | -served by <a href="https://www.OpenBSD.org">OpenBSD</a>/<a href="https://man.openbsd.org/relayd.8">relayd(8)</a>+<a 
href="https://man.openbsd.org/httpd.8">httpd(8)</a> | -<a href="https://foo.zone/site-mirrors.html">Site Mirrors</a> -</p> -</body> -</html> diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index 0dc19b96..44422104 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,12 +1,614 @@ <?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> - <updated>2025-03-05T19:58:06+02:00</updated> + <updated>2025-04-04T23:21:02+03:00</updated> <title>foo.zone feed</title> <subtitle>To be in the .zone!</subtitle> <link href="https://foo.zone/gemfeed/atom.xml" rel="self" /> <link href="https://foo.zone/" /> <id>https://foo.zone/</id> <entry> + <title>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</title> + <link href="https://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html" /> + <id>https://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html</id> + <updated>2025-04-04T23:21:01+03:00</updated> + <author> + <name>Paul Buetow aka snonux</name> + <email>paul@dev.buetow.org</email> + </author> + <summary>This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</summary> + <content type="xhtml"> + <div xmlns="http://www.w3.org/1999/xhtml"> + <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</h1><br /> +<br /> +<span>This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</span><br /> +<br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> +<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> +<br /> +<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> +<br /> +<ul> +<li><a href='#f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a></li> +<li>⇢ <a href='#introduction'>Introduction</a></li> +<li>⇢ <a href='#check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</a></li> +<li>⇢ <a href='#basic-bhyve-setup'>Basic Bhyve setup</a></li> +<li>⇢ <a href='#rocky-linux-vms'>Rocky Linux VMs</a></li> +<li>⇢ ⇢ <a href='#iso-download'>ISO download</a></li> +<li>⇢ ⇢ <a href='#vm-configuration'>VM configuration</a></li> +<li>⇢ ⇢ <a href='#vm-installation'>VM installation</a></li> +<li>⇢ ⇢ <a href='#increase-of-the-disk-image'>Increase of the disk image</a></li> +<li>⇢ ⇢ <a href='#connect-to-vnc'>Connect to VNC</a></li> +<li>⇢ <a href='#after-install'>After install</a></li> +<li>⇢ ⇢ <a href='#vm-auto-start-after-host-reboot'>VM auto-start 
after host reboot</a></li> +<li>⇢ ⇢ <a href='#static-ip-configuration'>Static IP configuration</a></li> +<li>⇢ ⇢ <a href='#permitting-root-login'>Permitting root login</a></li> +<li>⇢ ⇢ <a href='#install-latest-updates'>Install latest updates</a></li> +<li>⇢ <a href='#stress-testing-cpu'>Stress testing CPU</a></li> +<li>⇢ ⇢ <a href='#silly-freebsd-host-benchmark'>Silly FreeBSD host benchmark</a></li> +<li>⇢ ⇢ <a href='#silly-rocky-linux-vm--bhyve-benchmark'>Silly Rocky Linux VM @ Bhyve benchmark</a></li> +<li>⇢ ⇢ <a href='#silly-freebsd-vm--bhyve-benchmark'>Silly FreeBSD VM @ Bhyve benchmark</a></li> +<li>⇢ <a href='#benchmarking-with-ubench'>Benchmarking with <span class='inlinecode'>ubench</span></a></li> +<li>⇢ ⇢ <a href='#freebsd-host-ubench-benchmark'>FreeBSD host <span class='inlinecode'>ubench</span> benchmark</a></li> +<li>⇢ ⇢ <a href='#freebsd-vm--bhyve-ubench-benchmark'>FreeBSD VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</a></li> +<li>⇢ ⇢ <a href='#rocky-linux-vm--bhyve-ubench-benchmark'>Rocky Linux VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</a></li> +<li>⇢ <a href='#conclusion'>Conclusion</a></li> +</ul><br /> +<h2 style='display: inline' id='introduction'>Introduction</h2><br /> +<br /> +<span>In this blog post, we are going to install the Bhyve hypervisor.</span><br /> +<br /> +<span>The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.</span><br /> +<br /> +<span>Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.</span><br /> +<br /> +<h2 style='display: inline' id='check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</h2><br /> +<br /> +<span>POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.</span><br /> +<br /> +<span>To check for <span class='inlinecode'>POPCNT</span> support, run:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % dmesg | grep <font color="#808080">'Features2=.*POPCNT'</font> + Features2=<font color="#000000">0x7ffafbbf</font><SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG, + FMA,CX16,xTPR,PDCM,PCID,SSE4.<font color="#000000">1</font>,SSE4.<font color="#000000">2</font>,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE, + OSXSAVE,AVX,F16C,RDRAND> +</pre> +<br /> +<span>So it's there! All good.</span><br /> +<br /> +<h2 style='display: inline' id='basic-bhyve-setup'>Basic Bhyve setup</h2><br /> +<br /> +<span>For managing the Bhyve VMs, we are using <span class='inlinecode'>vm-bhyve</span>, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.</span><br /> +<br /> +<a class='textlink' href='https://github.com/churchers/vm-bhyve'>https://github.com/churchers/vm-bhyve</a><br /> +<br /> +<span>The following commands are executed on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, and <span class='inlinecode'>f2</span>, where <span class='inlinecode'>re0</span> is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware +paul@f0:~ % doas sysrc vm_enable=YES +vm_enable: -> YES +paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve +vm_dir: -> zfs:zroot/bhyve +paul@f0:~ % doas zfs create zroot/bhyve +paul@f0:~ % doas vm init +paul@f0:~ % doas vm switch create public +paul@f0:~ % doas vm switch add public re0 +</pre> +<br /> +<span>Bhyve stores all its data in the <span class='inlinecode'>bhyve</span> dataset of the <span class='inlinecode'>zroot</span> ZFS pool:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % zfs list | grep bhyve +zroot/bhyve <font color="#000000">1</font>.74M 453G <font color="#000000">1</font>.74M /zroot/bhyve +</pre> +<br /> +<span>For convenience, we also create this symlink:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve +</pre> +<br /> +<span>Now, Bhyve is ready to rumble, but no VMs are there yet:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it 
+http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm list +NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE +</pre> +<br /> +<h2 style='display: inline' id='rocky-linux-vms'>Rocky Linux VMs</h2><br /> +<br /> +<span>As guest VMs, I decided to use Rocky Linux.</span><br /> +<br /> +<span>Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.</span><br /> +<br /> +<a class='textlink' href='https://rockylinux.org/'>https://rockylinux.org/</a><br /> +<br /> +<h3 style='display: inline' id='iso-download'>ISO download</h3><br /> +<br /> +<span>We're going to install Rocky Linux from the latest minimal ISO:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm iso \ + https://download.rockylinux.org/pub/rocky/<font color="#000000">9</font>/isos/x86_64/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso +/zroot/bhyve/.iso/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso <font color="#000000">1808</font> MB <font color="#000000">4780</font> kBps 06m28s +paul@f0:/bhyve % doas vm create rocky +</pre> +<br /> +<h3 style='display: inline' id='vm-configuration'>VM configuration</h3><br /> +<br /> +<span>The default Bhyve VM configuration looks like this now:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % cat rocky.conf +loader=<font color="#808080">"bhyveload"</font> +cpu=<font 
color="#000000">1</font> +memory=256M +network0_type=<font color="#808080">"virtio-net"</font> +network0_switch=<font color="#808080">"public"</font> +disk0_type=<font color="#808080">"virtio-blk"</font> +disk0_name=<font color="#808080">"disk0.img"</font> +uuid=<font color="#808080">"1c4655ac-c828-11ef-a920-e8ff1ed71ca0"</font> +network0_mac=<font color="#808080">"58:9c:fc:0d:13:3f"</font> +</pre> +<br /> +<span>The <span class='inlinecode'>uuid</span> and the <span class='inlinecode'>network0_mac</span> differ for each of the three VMs.</span><br /> +<br /> +<span>To make Rocky Linux boot, some adjustments are needed (for example, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). So we run <span class='inlinecode'>doas vm configure rocky</span> and modify the configuration to:</span><br /> +<br /> +<pre> +guest="linux" +loader="uefi" +uefi_vars="yes" +cpu=4 +memory=14G +network0_type="virtio-net" +network0_switch="public" +disk0_type="virtio-blk" +disk0_name="disk0.img" +graphics="yes" +graphics_vga=io +uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac" +network0_mac="58:9c:fc:0d:13:3f" +</pre> +<br /> +<h3 style='display: inline' id='vm-installation'>VM installation</h3><br /> +<br /> +<span>To start the installer from the downloaded ISO, we run:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso +Starting rocky + * found guest <b><u><font color="#000000">in</font></u></b> /zroot/bhyve/rocky + * booting... 
+ +paul@f0:/bhyve/rocky % doas vm list +NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE +rocky default uefi <font color="#000000">4</font> 14G <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font> No Locked (f0.lan.buetow.org) + +paul@f0:/bhyve/rocky % doas sockstat -<font color="#000000">4</font> | grep <font color="#000000">5900</font> +root bhyve <font color="#000000">6079</font> <font color="#000000">8</font> tcp4 *:<font color="#000000">5900</font> *:* +</pre> +<br /> +<span>Port 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.</span><br /> +<br /> +<h3 style='display: inline' id='increase-of-the-disk-image'>Increase of the disk image</h3><br /> +<br /> +<span>By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran <span class='inlinecode'>truncate</span> on the image file to enlarge them to 100G, and re-started the installation:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % doas vm stop rocky +paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img +paul@f0:/bhyve/rocky % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso +</pre> +<br /> +<h3 style='display: inline' id='connect-to-vnc'>Connect to VNC</h3><br /> +<br /> +<span>For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were <span class='inlinecode'>vnc://f0:5900</span>, <span class='inlinecode'>vnc://f1:5900</span>, and <span class='inlinecode'>vnc://f2:5900</span>.</span><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/1.png'><img src='./f3s-kubernetes-with-freebsd-part-4/1.png' /></a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/2.png'><img src='./f3s-kubernetes-with-freebsd-part-4/2.png' /></a><br /> +<br /> +<span>I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.</span><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/3.png'><img src='./f3s-kubernetes-with-freebsd-part-4/3.png' /></a><br /> +<br /> +<a href='./f3s-kubernetes-with-freebsd-part-4/4.png'><img src='./f3s-kubernetes-with-freebsd-part-4/4.png' /></a><br /> +<br /> +<h2 style='display: inline' id='after-install'>After install</h2><br /> +<br /> +<span>We perform the following steps for all 3 VMs. In the following, the examples are all executed on <span class='inlinecode'>f0</span> (the VM <span class='inlinecode'>r0</span> running on <span class='inlinecode'>f0</span>):</span><br /> +<br /> +<h3 style='display: inline' id='vm-auto-start-after-host-reboot'>VM auto-start after host reboot</h3><br /> +<br /> +<span>To automatically start the VM on the servers, we add the following to the <span class='inlinecode'>rc.conf</span> on the FreeBSD hosts:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf +vm_list=<font color="#808080">"rocky"</font> +vm_delay=<font color="#808080">"5"</font> +END +</pre> +<br /> +<span>The <span class='inlinecode'>vm_delay</span> isn't really required. 
It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a <span class='inlinecode'>Yes</span> indicator in the <span class='inlinecode'>AUTO</span> column.</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas vm list +NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE +rocky default uefi <font color="#000000">4</font> 14G <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font> Yes [<font color="#000000">1</font>] Running (<font color="#000000">2063</font>) +</pre> +<br /> +<h3 style='display: inline' id='static-ip-configuration'>Static IP configuration</h3><br /> +<br /> +<span>After that, we change the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my <span class='inlinecode'>/etc/hosts</span> file:</span><br /> +<br /> +<pre> +192.168.1.130 f0 f0.lan f0.lan.buetow.org +192.168.1.131 f1 f1.lan f1.lan.buetow.org +192.168.1.132 f2 f2.lan f2.lan.buetow.org +</pre> +<br /> +<span>For the Rocky VMs, we add those to the FreeBSD host systems as well:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts +<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org +<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org +<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org +END +</pre> +<br /> +<span>And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to 
the VMs and entering the following commands on each of the VMs:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address <font color="#000000">192.168</font>.<font color="#000000">1.120</font>/<font color="#000000">24</font> +[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway <font color="#000000">192.168</font>.<font color="#000000">1.1</font> +[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns <font color="#000000">192.168</font>.<font color="#000000">1.1</font> +[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual +[root@r0 ~] % nmcli connection down enp0s5 +[root@r0 ~] % nmcli connection up enp0s5 +[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org +[root@r0 ~] % cat <<END >>/etc/hosts +<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org +<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org +<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org +END +</pre> +<br /> +<span>Where:</span><br /> +<br /> +<ul> +<li><span class='inlinecode'>192.168.1.120</span> is the IP of the VM itself (here: <span class='inlinecode'>r0.lan.buetow.org</span>)</li> +<li><span class='inlinecode'>192.168.1.1</span> is the address of my home router, which also does DNS.</li> +</ul><br /> +<h3 style='display: inline' id='permitting-root-login'>Permitting root login</h3><br /> +<br /> +<span>As these VMs aren't directly reachable via SSH from the internet, we enable <span class='inlinecode'>root</span> login by adding a line with <span class='inlinecode'>PermitRootLogin yes</span> to <span class='inlinecode'>/etc/ssh/sshd_config</span>.</span><br /> +<br /> +<span>Once done, we reboot the VM by running <span class='inlinecode'>reboot</span> inside 
the VM to test whether everything was configured and persisted correctly.</span><br /> +<br /> +<span>After reboot, I copied my public key from my laptop to the 3 VMs:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>% <b><u><font color="#000000">for</font></u></b> i <b><u><font color="#000000">in</font></u></b> <font color="#000000">0</font> <font color="#000000">1</font> <font color="#000000">2</font>; <b><u><font color="#000000">do</font></u></b> ssh-copy-id root@r$i.lan.buetow.org; <b><u><font color="#000000">done</font></u></b> +</pre> +<br /> +<span>Then, I edited the <span class='inlinecode'>/etc/ssh/sshd_config</span> file again on all 3 VMs and configured <span class='inlinecode'>PasswordAuthentication no</span> to only allow SSH key authentication from now on.</span><br /> +<br /> +<h3 style='display: inline' id='install-latest-updates'>Install latest updates</h3><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~] % dnf update +[root@r0 ~] % reboot +</pre> +<br /> +<h2 style='display: inline' id='stress-testing-cpu'>Stress testing CPU</h2><br /> +<br /> +<span>The aim is to prove that Bhyve VMs are CPU-efficient. 
As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre><b><u><font color="#000000">package</font></u></b> main + +<b><u><font color="#000000">import</font></u></b> <font color="#808080">"testing"</font> + +<b><u><font color="#000000">func</font></u></b> BenchmarkCPUSilly1(b *testing.B) { + <b><u><font color="#000000">for</font></u></b> i := <font color="#000000">0</font>; i < b.N; i++ { + _ = i * i + } +} + +<b><u><font color="#000000">func</font></u></b> BenchmarkCPUSilly2(b *testing.B) { + <b><u><font color="#000000">var</font></u></b> sillyResult <b><font color="#000000">float64</font></b> + <b><u><font color="#000000">for</font></u></b> i := <font color="#000000">0</font>; i < b.N; i++ { + sillyResult += <b><font color="#000000">float64</font></b>(i) + sillyResult *= <b><font color="#000000">float64</font></b>(i) + divisor := <b><font color="#000000">float64</font></b>(i) + <font color="#000000">1</font> + <b><u><font color="#000000">if</font></u></b> divisor > <font color="#000000">0</font> { + sillyResult /= divisor + } + } + _ = sillyResult <i><font color="silver">// to avoid compiler optimization</font></i> +} +</pre> +<br /> +<span>You can find the repository here:</span><br /> +<br /> +<a class='textlink' href='https://codeberg.org/snonux/sillybench'>https://codeberg.org/snonux/sillybench</a><br /> +<br /> +<h3 style='display: inline' id='silly-freebsd-host-benchmark'>Silly FreeBSD host benchmark</h3><br /> +<br /> +<span>To install it on FreeBSD, we run:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas pkg install git go +paul@f0:~ % 
mkdir ~/git && cd ~/git && \ + git clone https://codeberg.org/snonux/sillybench && \ + cd sillybench +</pre> +<br /> +<span>And to run it:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~/git/sillybench % go version +go version go1.<font color="#000000">24.1</font> freebsd/amd<font color="#000000">64</font> + +paul@f0:~/git/sillybench % go <b><u><font color="#000000">test</font></u></b> -bench=. +goos: freebsd +goarch: amd64 +pkg: codeberg.org/snonux/sillybench +cpu: Intel(R) N100 +BenchmarkCPUSilly1-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4022</font> ns/op +BenchmarkCPUSilly2-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4027</font> ns/op +PASS +ok codeberg.org/snonux/sillybench <font color="#000000">0</font>.891s +</pre> +<br /> +<h3 style='display: inline' id='silly-rocky-linux-vm--bhyve-benchmark'>Silly Rocky Linux VM @ Bhyve benchmark</h3><br /> +<br /> +<span>OK, let's compare this with the Rocky Linux VM running on Bhyve:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 ~]<i><font color="silver"># dnf install golang git</font></i> +[root@r0 ~]<i><font color="silver"># mkdir ~/git && cd ~/git && \</font></i> + git clone https://codeberg.org/snonux/sillybench && \ + cd sillybench +</pre> +<br /> +<span>And to run it:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>[root@r0 sillybench]<i><font color="silver"># go version</font></i> +go version go1.<font color="#000000">22.9</font> (Red Hat <font color="#000000">1.22</font>.<font color="#000000">9</font>-<font color="#000000">2</font>.el9_5) 
linux/amd<font color="#000000">64</font> +[root@r0 sillybench]<i><font color="silver"># go test -bench=.</font></i> +goos: linux +goarch: amd64 +pkg: codeberg.org/snonux/sillybench +cpu: Intel(R) N100 +BenchmarkCPUSilly1-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4347</font> ns/op +BenchmarkCPUSilly2-<font color="#000000">4</font> <font color="#000000">1000000000</font> <font color="#000000">0.4345</font> ns/op +</pre> +<span>The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.</span><br /> +<br /> +<h3 style='display: inline' id='silly-freebsd-vm--bhyve-benchmark'>Silly FreeBSD VM @ Bhyve benchmark</h3><br /> +<br /> +<span>But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. 
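To put "slightly slower" into a number, the relative overhead can be computed from the two BenchmarkCPUSilly1 figures above (0.4022 ns/op on the FreeBSD host vs. 0.4347 ns/op in the Rocky Linux VM). Here is a small Go sketch of that arithmetic (not part of sillybench; the function name is mine):

```go
package main

import "fmt"

// overheadPercent returns the relative slowdown of vm versus host, in percent.
func overheadPercent(host, vm float64) float64 {
	return (vm - host) / host * 100
}

func main() {
	// ns/op figures taken from the benchmark output above.
	host := 0.4022 // BenchmarkCPUSilly1 on the FreeBSD host
	vm := 0.4347   // BenchmarkCPUSilly1 in the Rocky Linux VM on Bhyve
	fmt.Printf("VM overhead: %.1f%%\n", overheadPercent(host, vm)) // ≈ 8.1%
}
```

So even the largest gap between the two silly runs stays in the single-digit percent range, which matches the conclusion that Bhyve performs excellently.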
I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.</span><br /> +<br /> +<span>But here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won't use as many CPUs anyway):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>root@freebsd:~/git/sillybench <i><font color="silver"># go test -bench=.</font></i> +goos: freebsd +goarch: amd64 +pkg: codeberg.org/snonux/sillybench +cpu: Intel(R) N100 +BenchmarkCPUSilly1 <font color="#000000">1000000000</font> <font color="#000000">0.4273</font> ns/op +BenchmarkCPUSilly2 <font color="#000000">1000000000</font> <font color="#000000">0.4286</font> ns/op +PASS +ok codeberg.org/snonux/sillybench <font color="#000000">0</font>.949s +</pre> +<br /> +<span>It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!</span><br /> +<br /> +<h2 style='display: inline' id='benchmarking-with-ubench'>Benchmarking with <span class='inlinecode'>ubench</span></h2><br /> +<br /> +<span>Let's run another, more sophisticated benchmark using <span class='inlinecode'>ubench</span>, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running <span class='inlinecode'>doas pkg install ubench</span>. It can benchmark CPU and memory performance. 
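An aside on the ~0.4 ns/op numbers from the silly benchmark above: values that low usually mean the Go compiler has optimized most of the loop body away, which is why sillybench assigns the result to a blank variable. A more robust variant (a hypothetical sketch, not from the sillybench repository) keeps the work observable via a package-level sink and can even run outside of `go test` via `testing.Benchmark`:

```go
package main

import (
	"fmt"
	"testing"
)

// A package-level sink: the compiler cannot prove the value is unused,
// so the loop's multiplications are not eliminated as dead code.
var sink int

func BenchmarkSillySink(b *testing.B) {
	local := 0
	for i := 0; i < b.N; i++ {
		local += i * i
	}
	sink = local // one store after the loop keeps `local` live
}

func main() {
	// testing.Benchmark runs a benchmark function without the go test driver.
	r := testing.Benchmark(BenchmarkSillySink)
	fmt.Printf("%d iterations, %d ns/op\n", r.N, r.NsPerOp())
}
```

The absolute numbers will differ from the ones above, but the host-versus-VM comparison works the same way with any such micro-benchmark.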
Here, we limit it to one CPU for the first run with <span class='inlinecode'>-s</span>, and then let it run at full speed in the second run.</span><br /> +<br /> +<h3 style='display: inline' id='freebsd-host-ubench-benchmark'>FreeBSD host <span class='inlinecode'>ubench</span> benchmark</h3><br /> +<br /> +<span>Single CPU:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas ubench -s <font color="#000000">1</font> +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. +Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench Single CPU: <font color="#000000">671010</font> (<font color="#000000">0</font>.40s) +Ubench Single MEM: <font color="#000000">1705237</font> (<font color="#000000">0</font>.48s) +----------------------------------- +Ubench Single AVG: <font color="#000000">1188123</font> + +</pre> +<br /> +<span>All CPUs (with all Bhyve VMs stopped):</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>paul@f0:~ % doas ubench +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. 
+Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench CPU: <font color="#000000">2660220</font> +Ubench MEM: <font color="#000000">3095182</font> +-------------------- +Ubench AVG: <font color="#000000">2877701</font> +</pre> +<br /> +<h3 style='display: inline' id='freebsd-vm--bhyve-ubench-benchmark'>FreeBSD VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</h3><br /> +<br /> +<span>Single CPU:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>root@freebsd:~ <i><font color="silver"># ubench -s 1</font></i> +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. +Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench Single CPU: <font color="#000000">672792</font> (<font color="#000000">0</font>.40s) +Ubench Single MEM: <font color="#000000">852757</font> (<font color="#000000">0</font>.48s) +----------------------------------- +Ubench Single AVG: <font color="#000000">762774</font> +</pre> +<br /> +<span>All CPUs:</span><br /> +<br /> +<!-- Generator: GNU source-highlight 3.1.9 +by Lorenzo Bettini +http://www.lorenzobettini.it +http://www.gnu.org/software/src-highlite --> +<pre>root@freebsd:~ <i><font color="silver"># ubench</font></i> +Unix Benchmark Utility v.<font color="#000000">0.3</font> +Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc. 
+Author: Sergei Viznyuk <sv@phystech.com> +http://www.phystech.com/download/ubench.html +FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64 +Ubench CPU: <font color="#000000">2652857</font> +swap_pager: out of swap space +swp_pager_getswapspace(<font color="#000000">27</font>): failed +swap_pager: out of swap space +swp_pager_getswapspace(<font color="#000000">18</font>): failed +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">43</font> freebsd kernel: pid <font color="#000000">862</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +swp_pager_getswapspace(<font color="#000000">6</font>): failed +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">46</font> freebsd kernel: pid <font color="#000000">863</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">47</font> freebsd kernel: pid <font color="#000000">864</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">48</font> freebsd kernel: pid <font color="#000000">865</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">49</font> freebsd kernel: pid <font color="#000000">861</font> (ubench), jid <font 
color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +Apr <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">51</font> freebsd kernel: pid <font color="#000000">839</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory +</pre> +<br /> +<span>The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.</span><br /> +<br /> +<span>Also, during the benchmark, I noticed the <span class='inlinecode'>bhyve</span> process on the host was constantly using 399% of the CPU (all 4 CPUs).</span><br /> +<br /> +<pre> + PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND + 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve +</pre> +<br /> +<span>Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?</span><br /> +<br /> +<h3 style='display: inline' id='rocky-linux-vm--bhyve-ubench-benchmark'>Rocky Linux VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</h3><br /> +<br /> +<span>Unfortunately, I wasn't able to find <span class='inlinecode'>ubench</span> in any of the Rocky Linux repositories. So, I skipped this test.</span><br /> +<br /> +<h2 style='display: inline' id='conclusion'>Conclusion</h2><br /> +<br /> +<span>Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.</span><br /> +<br /> +<span>Future uses (out of scope for this blog series) would include additional VMs for different workloads. For example, how about a Windows or NetBSD VM to tinker with?</span><br /> +<br /> +<span>This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.</span><br /> +<br /> +<span>See you in the next blog post of this series. Maybe we will install highly available storage with HAST, or start setting up k3s on the Rocky Linux VMs.</span><br /> +<br /> +<span>Other *BSD-related posts:</span><br /> +<br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br /> +<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> +<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> +<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> +<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br /> +<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br /> +<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br /> +<a class='textlink' 
href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let's Encrypt with OpenBSD and Rex</a><br /> +<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br /> +<br /> +<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br /> +<br /> +<a class='textlink' href='../'>Back to the main site</a><br /> + </div> + </content> + </entry> + <entry> <title>Sharing on Social Media with Gos v1.0.0</title> <link href="https://foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.html" /> <id>https://foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.html</id> @@ -476,7 +1078,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <h3 style='display: inline' id='13-go-functions-can-have-methods'>13. Go functions can have methods</h3><br /> <br /> -<span>Functions on struct types? Well, know. Functions on types like <span class='inlinecode'>int</span> and <span class='inlinecode'>string</span>? It's also known of, but a bit lesser. Functions on function types? That sounds a bit funky, but it's possible, too! For demonstration, have a look at this snippet:</span><br /> +<span>Functions on struct types? Well known. Functions on types like <span class='inlinecode'>int</span> and <span class='inlinecode'>string</span>? It's also known of, but a bit lesser. Functions on function types? That sounds a bit funky, but it's possible, too! For demonstration, have a look at this snippet:</span><br /> <br /> <!-- Generator: GNU source-highlight 3.1.9 by Lorenzo Bettini @@ -522,7 +1124,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <h3 style='display: inline' id='14--and-ss-are-treated-the-same'>14. ß and ss are treated the same</h3><br /> <br /> -<span>Know German? In German, the letter "sarp s" is written as ß. ß is treated the same as ss on macOS.</span><br /> +<span>Know German? In German, the letter "sharp s" is written as ß. 
ß is treated the same as ss on macOS.</span><br /> <br /> <span>On a case-insensitive file system like macOS, not only are uppercase and lowercase letters treated the same, but non-Latin characters like the German "ß" are also considered equivalent to their Latin counterparts (in this case, "ss").</span><br /> <br /> @@ -709,6 +1311,7 @@ This is perl, v5.<font color="#000000">8.8</font> built <b><u><font color="#0000 <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)</a><br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <br /> <a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br /> <br /> @@ -730,6 +1333,7 @@ This is perl, v5.<font color="#000000">8.8</font> built <b><u><font color="#0000 <li>⇢ <a href='#power-outage-simulation'>Power outage simulation</a></li> <li>⇢ ⇢ <a href='#pulling-the-plug'>Pulling the plug</a></li> <li>⇢ ⇢ <a href='#restoring-power'>Restoring power</a></li> +<li>⇢ <a href='#conclusion'>Conclusion</a></li> </ul><br /> <h2 style='display: inline' id='introduction'>Introduction</h2><br /> <br /> @@ -1091,10 +1695,17 @@ Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd exiting, signal 15 Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded </pre> <br /> -<span>All good :-) See you in the next post of this series!</span><br /> +<span>All good :-)</span><br /> +<br 
/> +<h2 style='display: inline' id='conclusion'>Conclusion</h2><br /> +<br /> +<span>I have the same UPS (but with a bit more capacity) for my main work setup, which powers my 28" screen, music equipment, etc. It has already been helpful a couple of times during power outages here, so I am sure that the smaller UPS for the F3s setup will be of great use.</span><br /> +<br /> +<span>See you in the next post of this series!</span><br /> <br /> <span>Other BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> @@ -1797,6 +2408,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <span>These are all the posts so far:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting 
the stage</a><br /> @@ -2130,6 +2742,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> @@ -2167,6 +2780,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>These are all the posts so far:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)</a><br /> @@ -2193,7 +2807,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <li>⇢ <a href='#monitoring-keeping-an-eye-on-everything'>Monitoring: Keeping an eye on everything</a></li> <li>⇢ ⇢ <a href='#prometheus-and-grafana'>Prometheus and 
Grafana</a></li> <li>⇢ ⇢ <a href='#gogios-my-custom-alerting-system'>Gogios: My custom alerting system</a></li> -<li>⇢ <a href='#what-s-after-this-all'>What's after this all?</a></li> +<li>⇢ <a href='#conclusion'>Conclusion</a></li> </ul><br /> <h2 style='display: inline' id='why-this-setup'>Why this setup?</h2><br /> <br /> @@ -2303,7 +2917,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>Ironically, I implemented Gogios to avoid using more complex alerting systems like Prometheus, but here we go—it integrates well now.</span><br /> <br /> -<h2 style='display: inline' id='what-s-after-this-all'>What's after this all?</h2><br /> +<h2 style='display: inline' id='conclusion'>Conclusion</h2><br /> <br /> <span>This setup may be just the beginning. Some ideas I'm thinking about for the future:</span><br /> <br /> @@ -2321,6 +2935,7 @@ dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font> <br /> <span>Other *BSD-related posts:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)</a><br /> @@ -4809,6 +5424,7 @@ http://www.gnu.org/software/src-highlite --> <br /> <span>Other *BSD and KISS related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky 
Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> @@ -5170,6 +5786,7 @@ $ doas reboot <i><font color="silver"># Just in case, reboot one more time</font <br /> <span>Other *BSD related posts are:</span><br /> <br /> +<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br /> <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br /> <a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br /> <a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br /> @@ -8775,385 +9392,4 @@ nmap ,<b><u><font color="#000000">i</font></u></b> !wpbpaste<CR> </div> </content> </entry> - <entry> - <title>Installing DTail on OpenBSD</title> - <link href="https://foo.zone/gemfeed/2022-10-30-installing-dtail-on-openbsd.html" /> - <id>https://foo.zone/gemfeed/2022-10-30-installing-dtail-on-openbsd.html</id> - <updated>2022-10-30T11:03:19+02:00</updated> - <author> - <name>Paul Buetow aka snonux</name> - <email>paul@dev.buetow.org</email> - </author> - <summary>This will be a quick blog post, as I am busy with my personal life now. I have relocated to a different country and am still busy arranging things. 
So bear with me :-)</summary> - <content type="xhtml"> - <div xmlns="http://www.w3.org/1999/xhtml"> - <h1 style='display: inline' id='installing-dtail-on-openbsd'>Installing DTail on OpenBSD</h1><br /> -<br /> -<span class='quote'>Published at 2022-10-30T11:03:19+02:00</span><br /> -<br /> -<span>This will be a quick blog post, as I am busy with my personal life now. I have relocated to a different country and am still busy arranging things. So bear with me :-)</span><br /> -<br /> -<span> In this post, I want to give a quick overview (or how-to) about installing DTail on OpenBSD, as the official documentation only covers Red Hat and Fedora Linux! And this blog post will also be used as my reference!</span><br /> -<br /> -<a class='textlink' href='https://dtail.dev'>https://dtail.dev</a><br /> -<br /> -<span>I am using Rexify for my OpenBSD automation. Check out the following article covering my Rex setup in a little bit more detail:</span><br /> -<br /> -<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>Let's Encrypt with OpenBSD and Rex</a><br /> -<br /> -<span>I will also mention some relevant <span class='inlinecode'>Rexfile</span> snippets in this post!</span><br /> -<br /> -<pre> - ,_---~~~~~----._ - _,,_,*^____ _____``*g*\"*, -/ __/ /' ^. / \ ^@q f - @f | | | | 0 _/ -\`/ \~__((@/ __ \__((@/ \ - | _l__l_ I <--- The Go Gopher - } [______] I - ] | | | | - ] ~ ~ | - | | - | | - | | A ; -~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~~~,--,-/ \---,-/|~~,~~~~~~~~~~~~~~~~~~~~~~~~~~~ - _|\,'. /| /| `/|-. - \`.' /| , `;. - ,'\ A A A A _ /| `.; - ,/ _ A _ / _ /| ; - /\ / \ , , A / / `/| - /_| | _ \ , , ,/ \ - // | |/ `.\ ,- , , ,/ ,/ \/ - / @| |@ / /' \ \ , > /| ,--. - |\_/ \_/ / | | , ,/ \ ./' __:.. - | __ __ | | | .--. , > > |-' / ` - ,/| / ' \ | | | \ , | / - / |<--.__,->| | | . `. > > / ( - /_,' \\ ^ / \ / / `. >-- /^\ | - \\___/ \ / / \__' \ \ \/ \ | - `. |/ , , /`\ \ ) - \ ' |/ , V \ / `-\ - OpenBSD Puffy ---> `|/ ' V V \ \.' \_ - '`-. 
V V \./'\ - `|/-. \ / \ /,---`\ kat - / `._____V_____V' - ' ' -</pre> -<br /> -<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br /> -<br /> -<ul> -<li><a href='#installing-dtail-on-openbsd'>Installing DTail on OpenBSD</a></li> -<li>⇢ <a href='#compile-it'>Compile it</a></li> -<li>⇢ <a href='#install-it'>Install it</a></li> -<li>⇢ ⇢ <a href='#rexification'>Rexification</a></li> -<li>⇢ <a href='#configure-it'>Configure it</a></li> -<li>⇢ ⇢ <a href='#rexification'>Rexification</a></li> -<li>⇢ <a href='#update-the-key-cache-for-it'>Update the key cache for it</a></li> -<li>⇢ ⇢ <a href='#rexification'>Rexification</a></li> -<li>⇢ <a href='#start-it'>Start it</a></li> -<li>⇢ <a href='#use-it'>Use it</a></li> -<li>⇢ <a href='#conclusions'>Conclusions</a></li> -</ul><br /> -<h2 style='display: inline' id='compile-it'>Compile it</h2><br /> -<br /> -<span>First of all, DTail needs to be downloaded and compiled. For that, <span class='inlinecode'>git</span>, <span class='inlinecode'>go</span>, and <span class='inlinecode'>gmake</span> are required:</span><br /> -<br /> -<pre> -$ doas pkg_add git go gmake -</pre> -<br /> -<span>I am happy that the Go Programming Language is readily available in the OpenBSD packaging system. Once the dependencies got installed, clone DTail and compile it:</span><br /> -<br /> -<pre> -$ mkdir git -$ cd git -$ git clone https://github.com/mimecast/dtail -$ cd dtail -$ gmake -</pre> -<br /> -<span>You can verify the version by running the following command:</span><br /> -<br /> -<pre> -$ ./dtail --version - DTail 4.1.0 Protocol 4.1 Have a lot of fun! 
-$ file dtail - dtail: ELF 64-bit LSB executable, x86-64, version 1 -</pre> -<br /> -<span>Now, there isn't any need anymore to keep <span class='inlinecode'>git</span>, <span class='inlinecode'>go</span> and <span class='inlinecode'>gmake</span>, so they can be deinstalled now:</span><br /> -<br /> -<pre> -$ doas pkg_delete git go gmake -</pre> -<br /> -<span>One day I shall create an official OpenBSD port for DTail.</span><br /> -<br /> -<h2 style='display: inline' id='install-it'>Install it</h2><br /> -<br /> -<span>Installing the binaries is now just a matter of copying them to <span class='inlinecode'>/usr/local/bin</span> as follows:</span><br /> -<br /> -<pre> -$ for bin in dserver dcat dgrep dmap dtail dtailhealth; do - doas cp -p $bin /usr/local/bin/$bin - doas chown root:wheel /usr/local/bin/$bin -done -</pre> -<br /> -<span>Also, we will be creating the <span class='inlinecode'>_dserver</span> service user:</span><br /> -<br /> -<pre> -$ doas adduser -class nologin -group _dserver -batch _dserver -$ doas usermod -d /var/run/dserver/ _dserver -</pre> -<br /> -<span>The OpenBSD init script is created from scratch (not part of the official DTail project). Run the following to install the bespoke script:</span><br /> -<br /> -<pre> -$ cat <<'END' | doas tee /etc/rc.d/dserver -#!/bin/ksh - -daemon="/usr/local/bin/dserver" -daemon_flags="-cfg /etc/dserver/dtail.json" -daemon_user="_dserver" - -. /etc/rc.d/rc.subr - -rc_reload=NO - -rc_pre() { - install -d -o _dserver /var/log/dserver - install -d -o _dserver /var/run/dserver/cache -} - -rc_cmd $1 & -END -$ doas chmod 755 /etc/rc.d/dserver -</pre> -<br /> -<h3 style='display: inline' id='rexification'>Rexification</h3><br /> -<br /> -<span>This is the task for setting it up via Rex. Note the <span class='inlinecode'>. . . 
.</span>, that's a placeholder which we will fill up more and more during this blog post:</span><br /> -<br /> -<pre> -desc 'Setup DTail'; -task 'dtail', group => 'frontends', - sub { - my $restart = FALSE; - - file '/etc/rc.d/dserver': - content => template('./etc/rc.d/dserver.tpl'), - owner => 'root', - group => 'wheel', - mode => '755', - on_change => sub { $restart = TRUE }; - - . - . - . - . - - service 'dserver' => 'restart' if $restart; - service 'dserver', ensure => 'started'; - }; -</pre> -<br /> -<h2 style='display: inline' id='configure-it'>Configure it</h2><br /> -<br /> -<span>Now, DTail is fully installed but still needs to be configured. Grab the default config file from GitHub ...</span><br /> -<br /> -<pre> -$ doas mkdir /etc/dserver -$ curl https://raw.githubusercontent.com/mimecast/dtail/master/examples/dtail.json.examples | - doas tee /etc/dserver/dtail.json -</pre> -<br /> -<span>... and then edit it and adjust <span class='inlinecode'>LogDir</span> in the <span class='inlinecode'>Common</span> section to <span class='inlinecode'>/var/log/dserver</span>. The result will look like this:</span><br /> -<br /> -<pre> - "Common": { - "LogDir": "/var/log/dserver", - "Logger": "Fout", - "LogRotation": "Daily", - "CacheDir": "cache", - "SSHPort": 2222, - "LogLevel": "Info" - } -</pre> -<br /> -<h3 style='display: inline' id='rexification'>Rexification</h3><br /> -<br /> -<span>That's as simple as adding the following to the Rex task:</span><br /> -<br /> -<pre> -file '/etc/dserver', - ensure => 'directory'; - -file '/etc/dserver/dtail.json', - content => template('./etc/dserver/dtail.json.tpl'), - owner => 'root', - group => 'wheel', - mode => '755', - on_change => sub { $restart = TRUE }; -</pre> -<br /> -<h2 style='display: inline' id='update-the-key-cache-for-it'>Update the key cache for it</h2><br /> -<br /> -<span>DTail relies on SSH for secure authentication and communication. 
However, the system user <span class='inlinecode'>_dserver</span> has no permission to read the SSH public keys from the users' home directories, so the DTail server also checks for available public keys in an alternative path <span class='inlinecode'>/var/run/dserver/cache</span>. </span><br /> -<br /> -<span>The following script, populating the DTail server key cache, can be run periodically via <span class='inlinecode'>cron</span>:</span><br /> -<br /> -<pre> -$ cat <<'END' | doas tee /usr/local/bin/dserver-update-key-cache.sh -#!/bin/ksh - -CACHEDIR=/var/run/dserver/cache -DSERVER_USER=_dserver -DSERVER_GROUP=_dserver - -echo 'Updating SSH key cache' - -ls /home/ | while read remoteuser; do - keysfile=/home/$remoteuser/.ssh/authorized_keys - - if [ -f $keysfile ]; then - cachefile=$CACHEDIR/$remoteuser.authorized_keys - echo "Caching $keysfile -> $cachefile" - - cp $keysfile $cachefile - chown $DSERVER_USER:$DSERVER_GROUP $cachefile - chmod 600 $cachefile - fi -done - -# Cleanup obsolete public SSH keys -find $CACHEDIR -name \*.authorized_keys -type f | -while read cachefile; do - remoteuser=$(basename $cachefile | cut -d. -f1) - keysfile=/home/$remoteuser/.ssh/authorized_keys - - if [ ! -f $keysfile ]; then - echo "Deleting obsolete cache file $cachefile" - rm $cachefile - fi -done - -echo 'All set...' -END -$ doas chmod 500 /usr/local/bin/dserver-update-key-cache.sh -</pre> -<br /> -<span>Note that the script above is a slight variation of the official DTail script. The official DTail one is a <span class='inlinecode'>bash</span> script, but on OpenBSD, there's <span class='inlinecode'>ksh</span>. 
I run it once daily by adding it to the <span class='inlinecode'>daily.local</span>:</span><br /> -<br /> -<pre> -$ echo /usr/local/bin/dserver-update-key-cache.sh | doas tee -a /etc/daily.local -/usr/local/bin/dserver-update-key-cache.sh -</pre> -<br /> -<h3 style='display: inline' id='rexification'>Rexification</h3><br /> -<br /> -<span>That's done by adding ...</span><br /> -<br /> -<pre> -file '/usr/local/bin/dserver-update-key-cache.sh', - content => template('./scripts/dserver-update-key-cache.sh.tpl'), - owner => 'root', - group => 'wheel', - mode => '500'; - -append_if_no_such_line '/etc/daily.local', '/usr/local/bin/dserver-update-key-cache.sh'; -</pre> -<br /> -<span>... to the Rex task!</span><br /> -<br /> -<h2 style='display: inline' id='start-it'>Start it</h2><br /> -<br /> -<span>Now, it's time to enable and start the DTail server:</span><br /> -<br /> -<pre> -$ sudo rcctl enable dserver -$ sudo rcctl start dserver -$ tail -f /var/log/dserver/*.log -INFO|1022-090634|Starting scheduled job runner after 2s -INFO|1022-090634|Starting continuous job runner after 2s -INFO|1022-090644|24204|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0 -INFO|1022-090654|24204|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0 -INFO|1022-090719|Starting server|DTail 4.1.0 Protocol 4.1 Have a lot of fun! -INFO|1022-090719|Generating private server RSA host key -INFO|1022-090719|Starting server -INFO|1022-090719|Binding server|0.0.0.0:2222 -INFO|1022-090719|Starting scheduled job runner after 2s -INFO|1022-090719|Starting continuous job runner after 2s -INFO|1022-090729|86050|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0 -INFO|1022-090739|86050|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnect -. -. -. 
-Ctrl+C -</pre> -<br /> -<span>As we don't want to wait until tomorrow, let's populate the key cache manually:</span><br /> -<br /> -<pre> -$ doas /usr/local/bin/dserver-update-key-cache.sh -Updating SSH key cache -Caching /home/_dserver/.ssh/authorized_keys -> /var/cache/dserver/_dserver.authorized_keys -Caching /home/admin/.ssh/authorized_keys -> /var/cache/dserver/admin.authorized_keys -Caching /home/failunderd/.ssh/authorized_keys -> /var/cache/dserver/failunderd.authorized_keys -Caching /home/git/.ssh/authorized_keys -> /var/cache/dserver/git.authorized_keys -Caching /home/paul/.ssh/authorized_keys -> /var/cache/dserver/paul.authorized_keys -Caching /home/rex/.ssh/authorized_keys -> /var/cache/dserver/rex.authorized_keys -All set... -</pre> -<br /> -<h2 style='display: inline' id='use-it'>Use it</h2><br /> -<br /> -<span>The DTail server is now ready to serve connections. You can use any of the DTail commands, such as <span class='inlinecode'>dtail</span>, <span class='inlinecode'>dgrep</span>, <span class='inlinecode'>dmap</span>, <span class='inlinecode'>dcat</span>, <span class='inlinecode'>dtailhealth</span>, to do so. 
Check out all the usage examples on the official DTail page.</span><br /> -<br /> -<span>I have installed the DTail server this way on my personal OpenBSD frontends <span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span>, and the following command connects as user <span class='inlinecode'>rex</span> to both machines and greps the file <span class='inlinecode'>/etc/fstab</span> for the string <span class='inlinecode'>local</span>:</span><br /> -<br /> -<pre> -❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab -CLIENT|earth|WARN|Encountered unknown host|{blowfish.buetow.org:2222 0xc0000a00f0 0xc0000a61e0 [blowfish.buetow.org]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN [23.88.35.144]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN 0xc0000a2180} -CLIENT|earth|WARN|Encountered unknown host|{fishfinger.buetow.org:2222 0xc0000a0150 0xc000460110 [fishfinger.buetow.org]:2222 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDNiikdL7+tWSN0rCaw1tOd9aQgeUFgb830V9ejkyJ5h93PKLCWZSMMCtiabc1aUeUZR//rZjcPHFLuLq/YC+Y3naYtGd6j8qVrcfG8jy3gCbs4tV9SZ9qd5E24mtYqYdGlee6JN6kEWhJxFkEwPfNlG+YAr3KC8lvEAE2JdWvaZavqsqMvHZtAX3b25WCBf2HGkyLZ+d9cnimRUOt+/+353BQFCEct/2mhMVlkr4I23CY6Tsufx0vtxx25nbFdZias6wmhxaE9p3LiWXygPWGU5iZ4RSQSImQz4zyOc9rnJeP1rwGk0OWDJhdKNXuf0kIPdzMfwxv2otgY32/DJj6L [46.23.94.99]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNiikdL7+tWSN0rCaw1tOd9aQgeUFgb830V9ejkyJ5h93PKLCWZSMMCtiabc1aUeUZR//rZjcPHFLuLq/YC+Y3naYtGd6j8qVrcfG8jy3gCbs4tV9SZ9qd5E24mtYqYdGlee6JN6kEWhJxFkEwPfNlG+YAr3KC8lvEAE2JdWvaZavqsqMvHZtAX3b25WCBf2HGkyLZ+d9cnimRUOt+/+353BQFCEct/2mhMVlkr4I23CY6Tsufx0vtxx25nbFdZias6wmhxaE9p3LiWXygPWGU5iZ4RSQSImQz4zyOc9rnJeP1rwGk0OWDJhdKNXuf0kIPdzMfwxv2otgY32/DJj6L 0xc0000a2240} -Encountered 2 unknown hosts: 'blowfish.buetow.org:2222,fishfinger.buetow.org:2222' -Do you want to trust these hosts?? (y=yes,a=all,n=no,d=details): a -CLIENT|earth|INFO|STATS:STATS|cgocalls=11|cpu=8|connected=2|servers=2|connected%=100|new=2|throttle=0|goroutines=19 -CLIENT|earth|INFO|Added hosts to known hosts file|/home/paul/.ssh/known_hosts -REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2 -REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2 -</pre> -<br /> -<span>Running it the second time, and given that you trusted the keys the first time, it won't prompt you for the host keys anymore:</span><br /> -<br /> -<pre> -❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab -REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2 -REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2 -</pre> -<br /> -<h2 style='display: inline' id='conclusions'>Conclusions</h2><br /> -<br /> -<span>It's a bit of manual work, but it's ok on this small scale! I shall invest time in creating an official OpenBSD port, though. 
That would render most of the manual steps obsolete, as outlined in this post!</span><br /> -<br /> -<span>Check out the following for more information:</span><br /> -<br /> -<a class='textlink' href='https://dtail.dev'>https://dtail.dev</a><br /> -<a class='textlink' href='https://github.com/mimecast/dtail'>https://github.com/mimecast/dtail</a><br /> -<a class='textlink' href='https://www.rexify.org'>https://www.rexify.org</a><br /> -<br /> -<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br /> -<br /> -<span>Other related posts are:</span><br /> -<br /> -<a class='textlink' href='./2023-09-25-dtail-usage-examples.html'>2023-09-25 DTail usage examples</a><br /> -<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD (You are currently reading this)</a><br /> -<a class='textlink' href='./2022-03-06-the-release-of-dtail-4.0.0.html'>2022-03-06 The release of DTail 4.0.0</a><br /> -<a class='textlink' href='./2021-04-22-dtail-the-distributed-log-tail-program.html'>2021-04-22 DTail - The distributed log tail program</a><br /> -<br /> -<a class='textlink' href='../'>Back to the main site</a><br /> - </div> - </content> - </entry> </feed> diff --git a/gemfeed/f3s-kubernetes-with-freebsd-part-4/1.png b/gemfeed/f3s-kubernetes-with-freebsd-part-4/1.png Binary files differnew file mode 100644 index 00000000..342e47ab --- /dev/null +++ b/gemfeed/f3s-kubernetes-with-freebsd-part-4/1.png diff --git a/gemfeed/f3s-kubernetes-with-freebsd-part-4/2.png b/gemfeed/f3s-kubernetes-with-freebsd-part-4/2.png Binary files differnew file mode 100644 index 00000000..12528b98 --- /dev/null +++ b/gemfeed/f3s-kubernetes-with-freebsd-part-4/2.png diff --git a/gemfeed/f3s-kubernetes-with-freebsd-part-4/3.png b/gemfeed/f3s-kubernetes-with-freebsd-part-4/3.png Binary files differnew file mode 100644 index 00000000..1396859c --- /dev/null +++ b/gemfeed/f3s-kubernetes-with-freebsd-part-4/3.png 
diff --git a/gemfeed/f3s-kubernetes-with-freebsd-part-4/4.png b/gemfeed/f3s-kubernetes-with-freebsd-part-4/4.png Binary files differnew file mode 100644 index 00000000..7ac32b02 --- /dev/null +++ b/gemfeed/f3s-kubernetes-with-freebsd-part-4/4.png diff --git a/gemfeed/helix-gpt.log b/gemfeed/helix-gpt.log new file mode 100644 index 00000000..8808371e --- /dev/null +++ b/gemfeed/helix-gpt.log @@ -0,0 +1,12330 @@ +APP 2025-04-04T19:15:11.463Z --> triggerCharacters: | ["{","("," "] + +APP 2025-04-04T19:15:11.470Z --> received request: | {"jsonrpc":"2.0","method":"initialize","params":{"capabilities":{"general":{"positionEncodings":["utf-8","utf-32","utf-16"]},"textDocument":{"codeAction":{"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["","quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}},"dataSupport":true,"disabledSupport":true,"isPreferredSupport":true,"resolveSupport":{"properties":["edit","command"]}},"completion":{"completionItem":{"deprecatedSupport":true,"insertReplaceSupport":true,"resolveSupport":{"properties":["documentation","detail","additionalTextEdits"]},"snippetSupport":true,"tagSupport":{"valueSet":[1]}},"completionItemKind":{}},"formatting":{"dynamicRegistration":false},"hover":{"contentFormat":["markdown"]},"inlayHint":{"dynamicRegistration":false},"publishDiagnostics":{"tagSupport":{"valueSet":[1,2]},"versionSupport":true},"rename":{"dynamicRegistration":false,"honorsChangeAnnotations":false,"prepareSupport":true},"signatureHelp":{"signatureInformation":{"activeParameterSupport":true,"documentationFormat":["markdown"],"parameterInformation":{"labelOffsetSupport":true}}}},"window":{"workDoneProgress":true},"workspace":{"applyEdit":true,"configuration":true,"didChangeConfiguration":{"dynamicRegistration":false},"didChangeWatchedFiles":{"dynamicRegistration":true,"relativePatternSupport":false},"executeCommand":{"dynamicRegistration":false},"fileOperations":{"didRename":true,"will
Rename":true},"inlayHint":{"refreshSupport":false},"symbol":{"dynamicRegistration":false},"workspaceEdit":{"documentChanges":true,"failureHandling":"abort","normalizesLineEndings":false,"resourceOperations":["create","rename","delete"]},"workspaceFolders":true}},"clientInfo":{"name":"helix","version":"25.01.1"},"processId":1676255,"rootPath":"/home/paul/git/foo.zone-content/gemtext","rootUri":"file:///home/paul/git/foo.zone-content/gemtext","workspaceFolders":[{"name":"gemtext","uri":"file:///home/paul/git/foo.zone-content/gemtext"}]},"id":0} + +APP 2025-04-04T19:15:11.471Z --> sent request | {"jsonrpc":"2.0","id":0,"result":{"capabilities":{"codeActionProvider":true,"executeCommandProvider":{"commands":["resolveDiagnostics","generateDocs","improveCode","refactorFromComment","writeTest"]},"completionProvider":{"resolveProvider":false,"triggerCharacters":["{","("," "]},"textDocumentSync":{"change":1,"openClose":true}}}} + +APP 2025-04-04T19:15:11.473Z --> received didOpen | language: markdown + +APP 2025-04-04T19:15:18.482Z --> received didChange | language: markdown | contentVersion: 83 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:15:40.557Z --> received didChange | language: markdown | contentVersion: 84 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:15:40.767Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":48,"line":131},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":1} + +APP 2025-04-04T19:15:40.892Z --> received didChange | language: markdown | contentVersion: 85 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + 
+APP 2025-04-04T19:15:40.972Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs to be changed (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only 3 VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs; the examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (from DHCP). 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":85} + +APP 2025-04-04T19:15:40.972Z --> skipping because content is stale + +APP 2025-04-04T19:15:40.972Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:15:40.972Z --> sent request | {"jsonrpc":"2.0","id":1,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:15:41.072Z --> received didChange | language: markdown | contentVersion: 86 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:04.452Z --> received didChange | language: markdown | contentVersion: 87 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:04.523Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":194,"line":147},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":2} + +APP 2025-04-04T19:16:04.725Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog 
post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of boilerplate. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for my purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
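A quick aside on the `truncate` step from the previous section: `truncate -s 100G` only changes the file's apparent size. The file stays sparse, so it doesn't immediately consume 100G on the ZFS dataset; blocks are only allocated as the guest writes to its disk. A throwaway demonstration (the temporary file here is illustrative; the real image is `disk0.img`):

```shell
# Create a throwaway file and grow it sparsely, as done with disk0.img above.
img=$(mktemp)
truncate -s 100G "$img"

# The apparent size is now 100G, but almost no blocks are actually allocated.
ls -lh "$img"   # apparent size: 100G
du -h "$img"    # allocated size: close to zero

rm -f "$img"
```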
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto-partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all three VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, I added the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, I changed the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

I then configured the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a `PermitRootLogin yes` line to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all three VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (about 8% in this run). The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look that up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility keeps options open for managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
contentVersion: 91 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:05.130Z --> received didChange | language: markdown | contentVersion: 92 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:05.196Z --> response | https://api.github.com/copilot_internal/v2/token | 200 + +APP 2025-04-04T19:16:05.197Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:16:05.916Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:16:05.918Z --> completion hints: | three VMs, and I wanted to see how the installation process looks like. So I decided to do it manually. + +APP 2025-04-04T19:16:05.919Z --> sent request | {"jsonrpc":"2.0","id":2,"result":{"isIncomplete":false,"items":[{"label":"three VMs, and I wanted to see how the installation process looks like. So I decided to do it manually.","kind":1,"preselect":true,"detail":"three VMs, and I wanted to see how the installation process looks like. So I decided to do it manually.","insertText":"three VMs, and I wanted to see how the installation process looks like. 
So I decided to do it manually.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":147,"character":297},"end":{"line":147,"character":398}}}]}]}} + +APP 2025-04-04T19:16:05.919Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:14.727Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:17.716Z --> received didChange | language: markdown | contentVersion: 93 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:17.882Z --> received didChange | language: markdown | contentVersion: 94 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:17.985Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":68,"line":151},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":3} + +APP 2025-04-04T19:16:18.187Z --> received didChange | language: markdown | contentVersion: 95 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:18.187Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for 
self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for o purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":95} + +APP 2025-04-04T19:16:18.187Z --> skipping because content is stale + +APP 2025-04-04T19:16:18.187Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:18.188Z --> sent request | {"jsonrpc":"2.0","id":3,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:16:18.236Z --> received didChange | language: markdown | contentVersion: 96 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:18.311Z --> received didChange | language: markdown | contentVersion: 97 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:38.573Z --> received didChange | language: markdown | contentVersion: 98 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:38.693Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":22,"line":161},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":4} + +APP 2025-04-04T19:16:38.815Z --> received didChange | language: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution that will run on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems rely on it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
Without POPCNT support, some applications might not run at all or might perform sub-optimally in a virtualized environment.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
    FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
    OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Rocky Linux 9 is a good fit as a VM OS primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We are going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso   1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root   bhyve   6079  8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, I added the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, I changed the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

I then configured the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (note that `go test` only picks up benchmarks from `*_test.go` files):

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, weopened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":100} + +APP 2025-04-04T19:16:39.451Z --> calling completion event + +APP 2025-04-04T19:16:39.451Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}}}] + +APP 2025-04-04T19:16:39.451Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:16:39.451Z --> copilot | completion request + +APP 2025-04-04T19:16:39.452Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:16:39.479Z --> received didChange | language: markdown | contentVersion: 101 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:39.487Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":25,"line":161},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":6} + +APP 2025-04-04T19:16:39.688Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few changes. We also make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga="io"
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900          *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps on all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a server reboot, I added the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```
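A side note on the `rc.conf` append above: a heredoc piped into `tee -a` isn't idempotent, so running it twice duplicates the lines. On FreeBSD, `doas sysrc vm_list=rocky` would sidestep that. A portable sketch of an append-once guard (a temp file stands in for `/etc/rc.conf` here, so this is illustration only):

```sh
# Sketch only: $rcconf stands in for /etc/rc.conf.
rcconf=$(mktemp)

append_once() {
    # Append $1 only if an identical line isn't already present.
    grep -qxF "$1" "$rcconf" || printf '%s\n' "$1" >> "$rcconf"
}

append_once 'vm_list="rocky"'
append_once 'vm_delay="5"'
append_once 'vm_list="rocky"' # no-op: the line is already there

cat "$rcconf"
```

Each setting ends up in the file exactly once, no matter how often the script runs.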
### Static IP configuration

After that, I changed the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

I then configured the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.
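The two `sshd_config` edits (first `PermitRootLogin yes`, later `PasswordAuthentication no`) could also be scripted with `sed` instead of a manual edit. A sketch, assuming the stock Rocky Linux 9 keyword spelling and GNU sed; it operates on a temp copy here, whereas on the VMs the file is `/etc/ssh/sshd_config` and `sshd` needs a restart afterwards (`systemctl restart sshd`):

```sh
# Sketch: $conf stands in for /etc/ssh/sshd_config.
conf=$(mktemp)
printf '%s\n' '#PermitRootLogin prohibit-password' '#PasswordAuthentication yes' > "$conf"

# Uncomment-or-rewrite both directives in place (GNU sed):
sed -i \
    -e 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' \
    -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' \
    "$conf"

cat "$conf"
# PermitRootLogin yes
# PasswordAuthentication no
```

`sshd -t` is worth running afterwards to validate the configuration before restarting the daemon.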
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!
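One more caveat about the silly benchmark itself: ~0.4 ns/op is roughly one CPU cycle per iteration, which suggests the compiler may have optimized the loop bodies away (the `_ =` assignments don't force the results to be kept). Writing the result to a package-level sink defeats that dead-code elimination. A sketch of the idea; the `sink` variable and the standalone `testing.Benchmark` driver are my additions, not part of sillybench:

```go
package main

import (
	"fmt"
	"testing"
)

// Package-level sink: the compiler can't prove the result is unused,
// so the multiplication in the loop body survives optimization.
var sink int

// runSilly benchmarks the multiply-and-store loop without needing `go test`.
func runSilly() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sink = i * i
		}
	})
}

func main() {
	r := runSilly()
	fmt.Printf("%d iterations, %.4f ns/op\n", r.N, float64(r.T.Nanoseconds())/float64(r.N))
}
```

With the sink in place, every iteration has to perform at least the multiply and a store, so the per-op timings become more meaningful when comparing the host against the VMs.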
## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:40.268Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":25,"line":161},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":7} + +APP 2025-04-04T19:16:40.470Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, we the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":102} + +APP 2025-04-04T19:16:40.471Z --> calling completion event + +APP 2025-04-04T19:16:40.471Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}}}] + +APP 2025-04-04T19:16:40.471Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:16:40.471Z --> copilot | completion request + +APP 2025-04-04T19:16:40.471Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:16:40.472Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}}}] + +APP 2025-04-04T19:16:40.472Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:16:40.483Z --> received didChange | language: markdown | contentVersion: 103 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:40.563Z --> received didChange | language: markdown | contentVersion: 104 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:40.849Z --> received didChange | language: markdown | contentVersion: 105 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:40.916Z --> received didChange | language: markdown | contentVersion: 106 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:41.010Z --> received didChange | language: markdown | contentVersion: 107 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:41.014Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":30,"line":161},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":8} + +APP 2025-04-04T19:16:41.216Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (instead of DHCP). 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd I configured the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of them:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":107} + +APP 2025-04-04T19:16:41.216Z --> calling completion event + +APP 2025-04-04T19:16:41.217Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}}}] + +APP 2025-04-04T19:16:41.217Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:16:41.217Z --> copilot | completion request + +APP 2025-04-04T19:16:41.217Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:16:41.218Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}}}] + +APP 2025-04-04T19:16:41.218Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":161,"character":0},"end":{"line":162,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:16:42.169Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 
2025-04-04T19:16:42.169Z --> completion hints: | the VNC connection to the VM. I used `Remmina` on my Linux laptop, but you can use any VNC client. The IP address is ` + +APP 2025-04-04T19:16:42.169Z --> sent request | {"jsonrpc":"2.0","id":8,"result":{"isIncomplete":false,"items":[{"label":"the VNC connection to the VM. I used `Remmina` on my Linux laptop, but you can use any VNC client. The IP address is `","kind":1,"preselect":true,"detail":"the VNC connection to the VM. I used `Remmina` on my Linux laptop, but you can use any VNC client. The IP address is `","insertText":"the VNC connection to the VM. I used `Remmina` on my Linux laptop, but you can use any VNC client. The IP address is `","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":161,"character":148},"end":{"line":161,"character":507}}}]}]}} + +APP 2025-04-04T19:16:42.169Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:43.326Z --> received didChange | language: markdown | contentVersion: 108 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:43.962Z --> received didChange | language: markdown | contentVersion: 109 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:47.261Z --> received didChange | language: markdown | contentVersion: 110 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:47.905Z --> received didChange | language: markdown | contentVersion: 111 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:16:49.451Z --> 
sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:49.690Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:49.691Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:50.472Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:50.472Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:51.218Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:16:51.218Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:20:16.173Z --> received didChange | language: markdown | contentVersion: 112 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 
2025-04-04T19:20:16.357Z --> received didChange | language: markdown | contentVersion: 113 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:16.563Z --> received didChange | language: markdown | contentVersion: 114 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:16.874Z --> received didChange | language: markdown | contentVersion: 115 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:16.881Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":163},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":9} + +APP 2025-04-04T19:20:16.959Z --> received didChange | language: markdown | contentVersion: 116 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:16.965Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":3,"line":163},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":10} + +APP 2025-04-04T19:20:17.165Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> \n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n## After install\n\nI performed the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, I added the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (instead of DHCP). 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd I configured the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of them:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":116} + +APP 2025-04-04T19:20:17.166Z --> calling completion event + +APP 2025-04-04T19:20:17.166Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":163,"character":0},"end":{"line":164,"character":0}}}] + +APP 2025-04-04T19:20:17.166Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":163,"character":0},"end":{"line":164,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:20:17.166Z --> copilot | completion request + +APP 2025-04-04T19:20:17.167Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:20:17.718Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:20:17.719Z --> completion hints: | https://www.gnome.org/ + +APP 2025-04-04T19:20:17.719Z --> sent request | {"jsonrpc":"2.0","id":10,"result":{"isIncomplete":false,"items":[{"label":"https://www.gnome.org/","kind":1,"preselect":true,"detail":"https://www.gnome.org/","insertText":"https://www.gnome.org/","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":163,"character":25},"end":{"line":163,"character":25}}}]}]}} + +APP 2025-04-04T19:20:17.719Z --> sent request | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows virtual machines to achieve near-native performance.
It leverages the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Guest operating systems use this instruction to perform various tasks more efficiently, so Bhyve support for it matters: if the host CPU supports POPCNT, Bhyve can pass the capability through to the virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management considerably. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM. So we run `doas vm configure rocky` and modify the config to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga="io"
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or at least more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
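Should the stop/grow/reinstall dance ever need repeating across all hosts, it is easy enough to script. Here is a minimal, untested sketch (the host names `f0`..`f2`, the VM name `rocky`, the image path, and the ISO name are the ones used in this series; the dry-run wrapper is my own addition, and it assumes passwordless SSH plus `doas` rights on each host):

```shell
#!/bin/sh
# Print (or run) the stop/grow/reinstall steps for each FreeBSD host.
# Pass "run" as the first argument to actually execute the commands
# over SSH; without it, they are only printed for review.
grow_rocky_disks() {
    for host in f0 f1 f2; do
        for cmd in \
            'doas vm stop rocky' \
            'doas truncate -s 100G /bhyve/rocky/disk0.img' \
            'doas vm install rocky Rocky-9.5-x86_64-minimal.iso'; do
            if [ "$1" = run ]; then
                ssh "$host" "$cmd"
            else
                printf 'ssh %s %s\n' "$host" "$cmd"
            fi
        done
    done
}

grow_rocky_disks "$@"
```

The dry-run default is deliberate: reviewing nine destructive commands before running them beats typing them nine times.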
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

## After install

I performed the following steps on all 3 VMs. The examples below are all executed on `f0` and `r0` (the VM running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically when a host boots, I added the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, I changed the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

I added the Rocky VMs to `/etc/hosts` on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Then I configured the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a `PermitRootLogin yes` line to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one, and the Go version is a bit older. I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":135} + +APP 2025-04-04T19:20:24.704Z --> calling completion event + +APP 2025-04-04T19:20:24.704Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":163,"character":0},"end":{"line":164,"character":0}}}] + +APP 2025-04-04T19:20:24.704Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":163,"character":0},"end":{"line":164,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:20:24.705Z --> copilot | completion request + +APP 2025-04-04T19:20:24.705Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:20:24.987Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:20:24.988Z --> completion hints: | f0-vnc.png VNC connection to the VM + +APP 2025-04-04T19:20:24.988Z --> sent request | {"jsonrpc":"2.0","id":12,"result":{"isIncomplete":false,"items":[{"label":"f0-vnc.png VNC connection to the VM","kind":1,"preselect":true,"detail":"f0-vnc.png VNC connection to the VM","insertText":"f0-vnc.png VNC connection to the VM","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":163,"character":75},"end":{"line":163,"character":75}}}]}]}} + +APP 2025-04-04T19:20:24.988Z --> sent request 
| {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:20:25.690Z --> received didChange | language: markdown | contentVersion: 136 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:26.182Z --> received didChange | language: markdown | contentVersion: 137 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:26.535Z --> received didChange | language: markdown | contentVersion: 138 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:26.853Z --> received didChange | language: markdown | contentVersion: 139 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:26.930Z --> received didChange | language: markdown | contentVersion: 140 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:27.167Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:20:27.181Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":45,"line":163},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":13} + +APP 2025-04-04T19:20:27.383Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
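Before creating any VMs, the CPU checks from the `POPCNT` section can be wrapped into a small pre-flight sketch (the `VMX` flag for Intel VT-x shows up in the same `Features2` line). The function name and structure are mine, not part of `vm-bhyve`:

```sh
# Pre-flight sketch: check that the CPU feature line (fed on stdin)
# contains both VMX (Intel VT-x) and POPCNT.
check_cpu_flags() {
    flags=$(cat)
    rc=0
    for f in VMX POPCNT; do
        case "$flags" in
        *"$f"*) echo "$f: ok" ;;
        *) echo "$f: MISSING"; rc=1 ;;
        esac
    done
    return $rc
}
```

On the host, this would be fed with `dmesg | grep 'Features2=' | check_cpu_flags`.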
## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, and with some other adjustments (as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM), we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900    *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
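For the record, had the unattended route been worth it, Rocky's Anaconda installer supports kickstart files. A minimal sketch (untested here, and all values are placeholders) might look like this:

```
# ks.cfg sketch for an unattended minimal install (placeholder values)
text
cdrom
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
autopart
reboot
%packages
@^minimal-environment
%end
```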
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

## After install

I performed the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs when a server boots, I added the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, I changed the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And I configured the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.
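Since the same `nmcli` sequence differs per VM only in the address and hostname, a small loop can print the command list for all three. This is just a sketch (the `print_net_config` helper is mine); it uses the canonical `ipv4.addresses`/`ipv4.dns` property names and assumes the interface is `enp0s5` on every VM:

```sh
# Print the nmcli/hostnamectl sequence for each VM r0..r2.
# Addressing and interface name match the setup above.
print_net_config() {
    for i in 0 1 2; do
        cat <<EOF
# --- run on r$i ---
nmcli connection modify enp0s5 ipv4.addresses 192.168.1.12$i/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
hostnamectl set-hostname r$i.lan.buetow.org
EOF
    done
}

print_net_config
```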
### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

That's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!
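To put a number on the host-vs-VM gap, here is a quick back-of-the-envelope calculation using the `BenchmarkCPUSilly1` figures from above (bare-metal FreeBSD vs the Rocky Linux VM):

```sh
# Relative overhead of the Rocky Linux VM vs the FreeBSD host for
# BenchmarkCPUSilly1: (0.4347 - 0.4022) / 0.4022, printed as a percentage.
awk 'BEGIN { host = 0.4022; vm = 0.4347; printf "%.1f%%\n", (vm - host) / host * 100 }'
# → 8.1%
```

So roughly an 8% difference in this (unscientific) microbenchmark.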
## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":145} + +APP 2025-04-04T19:20:44.557Z --> calling completion event + +APP 2025-04-04T19:20:44.557Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":169,"character":0},"end":{"line":170,"character":0}}}] + +APP 2025-04-04T19:20:44.558Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":169,"character":0},"end":{"line":170,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:20:44.558Z --> copilot | completion request + +APP 2025-04-04T19:20:44.558Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:20:45.173Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:20:45.174Z --> completion hints: | + +APP 2025-04-04T19:20:45.174Z --> sent request | {"jsonrpc":"2.0","id":14,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":169,"character":45},"end":{"line":169,"character":45}}}]}]}} + +APP 2025-04-04T19:20:45.174Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:20:45.413Z --> received didChange | language: markdown | contentVersion: 146 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:46.281Z --> received didChange | language: markdown | contentVersion: 147 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:47.473Z --> received didChange | language: markdown | contentVersion: 148 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:20:54.558Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:04.437Z --> received didChange | language: markdown | contentVersion: 149 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:04.442Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":46,"line":179},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":15} + +APP 2025-04-04T19:22:04.645Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting 
demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for POPCNT matters because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust this if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
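As a small aid (not from the original setup, just a sketch): before connecting, one can check from the laptop which hosts currently have an installer VNC session listening. This assumes `nc` (netcat with `-z` scan mode) is installed and that the host names `f0`, `f1`, and `f2` resolve as configured in this series:

```sh
#!/bin/sh
# Probe TCP port 5900 (the bhyve VNC console) on each FreeBSD host.
# nc -z only tests whether the port accepts a connection; -w sets a timeout.
for host in f0 f1 f2; do
    if nc -z -w 2 "$host" 5900 2>/dev/null; then
        echo "$host: VNC listening"
    else
        echo "$host: no VNC"
    fi
done
```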
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

I performed the following steps for all 3 VMs. The examples below were all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, I added the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, and there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, I changed the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

I then configured the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD may just be slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we ad the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":154} + +APP 2025-04-04T19:22:05.486Z --> skipping because content is stale + +APP 2025-04-04T19:22:05.486Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:05.486Z --> sent request | {"jsonrpc":"2.0","id":16,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:22:05.597Z --> received didChange | language: markdown | contentVersion: 155 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:05.848Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":52,"line":179},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":17} + +APP 2025-04-04T19:22:06.050Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nI performed the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":155} + +APP 2025-04-04T19:22:06.050Z --> calling completion event + +APP 2025-04-04T19:22:06.050Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":179,"character":0},"end":{"line":180,"character":0}}}] + +APP 2025-04-04T19:22:06.051Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":179,"character":0},"end":{"line":180,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:22:06.051Z --> copilot | completion request + +APP 2025-04-04T19:22:06.051Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:22:06.468Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:22:06.470Z --> completion hints: | the following lines to `/etc/rc.conf`: + +APP 2025-04-04T19:22:06.470Z --> sent request | {"jsonrpc":"2.0","id":17,"result":{"isIncomplete":false,"items":[{"label":"the following lines to `/etc/rc.conf`:","kind":1,"preselect":true,"detail":"the following lines to `/etc/rc.conf`:","insertText":"the following lines to `/etc/rc.conf`:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":179,"character":90},"end":{"line":179,"character":143}}}]}]}} + +APP 2025-04-04T19:22:06.470Z --> 
sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:14.647Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:15.575Z --> received didChange | language: markdown | contentVersion: 156 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:15.916Z --> received didChange | language: markdown | contentVersion: 157 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:16.051Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:16.113Z --> received didChange | language: markdown | contentVersion: 158 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:16.339Z --> received didChange | language: markdown | contentVersion: 159 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:16.344Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":3,"line":175},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":18} + +APP 2025-04-04T19:22:16.545Z --> running completion on 
buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run at all, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
    FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
    OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

To manage the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust this if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But a few adjustments are needed to make Rocky Linux boot. Furthermore, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and clicked through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We performed the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, I changed the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

I then configured the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for the future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, I changed the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":168} + +APP 2025-04-04T19:22:18.222Z --> calling completion event + +APP 2025-04-04T19:22:18.223Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":175,"character":0},"end":{"line":176,"character":0}}}] + +APP 2025-04-04T19:22:18.223Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":175,"character":0},"end":{"line":176,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:22:18.223Z --> copilot | completion request + +APP 2025-04-04T19:22:18.223Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:22:18.549Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:22:18.550Z --> completion hints: | a few more steps to make the VMs usable and ready for the next steps in this series. 
+ +APP 2025-04-04T19:22:18.550Z --> sent request | {"jsonrpc":"2.0","id":20,"result":{"isIncomplete":false,"items":[{"label":"a few more steps to make the VMs usable and ready for the next steps in this series.","kind":1,"preselect":true,"detail":"a few more steps to make the VMs usable and ready for the next steps in this series.","insertText":"a few more steps to make the VMs usable and ready for the next steps in this series.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":175,"character":95},"end":{"line":175,"character":216}}}]}]}} + +APP 2025-04-04T19:22:18.551Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:26.548Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:27.325Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:28.224Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:42.453Z --> received didChange | language: markdown | contentVersion: 169 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:42.458Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":12,"line":198},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":21} + +APP 2025-04-04T19:22:42.659Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
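If the cluster ever grows, the repetitive per-host steps could be scripted. Here is a dry-run sketch that only prints the commands from this section instead of executing them (host names and paths as used in this series; treat it as a starting point, not a tested tool):

```sh
#!/bin/sh
# Dry-run: print the per-host provisioning commands instead of running them
# over ssh. f0..f2 are the FreeBSD hosts; /bhyve is the symlink created above.
iso_url=https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
iso=Rocky-9.5-x86_64-minimal.iso

cmds=$(for host in f0 f1 f2; do
	echo "ssh $host doas vm iso $iso_url"
	echo "ssh $host doas vm create rocky"
	echo "ssh $host doas truncate -s 100G /bhyve/rocky/disk0.img"
	echo "ssh $host doas vm install rocky $iso"
done)
printf '%s\n' "$cmds"
```

Dropping the `echo`s would run the commands for real, which I have not done; the interactive installer part would still need VNC.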
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (automatic partitioning of the 100G drive) and set a root user password. After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, and there is currently only one VM per host; it may become useful later when there are more. After this change, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.

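The address plan is mechanical: `r0` to `r2` get `192.168.1.120` to `192.168.1.122`. So the per-VM NetworkManager commands can be generated with a small loop. This dry-run sketch only prints them (`enp0s5` is the connection name on my VMs; verify yours with `nmcli device status` before applying anything):

```sh
#!/bin/sh
# Dry-run: print the static IP configuration command for each VM without
# applying it. nmcli accepts several property/value pairs in one invocation.
gen=$(i=0
for vm in r0 r1 r2; do
	ip="192.168.1.$((120 + i))"
	echo "# on $vm:"
	echo "nmcli connection modify enp0s5 ipv4.addresses $ip/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1 ipv4.method manual"
	i=$((i + 1))
done)
printf '%s\n' "$gen"
```

In practice I typed the commands by hand on each VM, as shown below.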
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added these entries on the FreeBSD hosts as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Then I configured the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a `PermitRootLogin yes` line to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // keep the result alive so the compiler can't optimize the loop away
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we chane the network configuration of the VMs to be static (from DHCP) here. 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.
## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.
=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:
```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, and a reinstall happening once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.
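These endpoints follow directly from the host names, since each host currently exposes its single VM on the first VNC port. A tiny shell sketch that prints them (the fixed port 5900 is an assumption that only holds while there is one VNC-enabled VM per host):

```shell
# Print the VNC endpoint of each FreeBSD host's VM
# (assumes vm-bhyve assigned the default port 5900 on every host).
for h in f0 f1 f2; do
    echo "vnc://$h:5900"
done
```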
=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed on `f0` (and `r0`, the VM running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After this change, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, I added these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Then I configured the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of them anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt.

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?
This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:47.073Z --> received didChange | language: markdown | contentVersion: 185 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:22:47.079Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":22,"line":198},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":25} + +APP 2025-04-04T19:22:47.280Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, I added those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd configured the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmark in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with oranges, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt.

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":191} + +APP 2025-04-04T19:22:57.947Z --> skipping because content is stale + +APP 2025-04-04T19:22:57.947Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:22:57.947Z --> sent request | {"jsonrpc":"2.0","id":27,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:22:58.044Z --> received didChange | language: markdown | contentVersion: 192 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:07.308Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:15.139Z --> received didChange | language: markdown | contentVersion: 193 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:15.310Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":4,"line":216},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":28} + +APP 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It is efficient, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. It matters for virtualization because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```
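The setup above has to be repeated verbatim on `f1` and `f2`. As a hedged sketch (hostnames and the `re0` interface are the ones from this post), here is a dry-run generator that only prints the per-host command list; one could pipe each list through `ssh` to actually apply it:

```sh
#!/bin/sh
# Dry run: print the vm-bhyve setup commands for each host rather than
# executing them. Hosts f0..f2 and interface re0 are from this post.
print_setup_plan() {
    for host in f0 f1 f2; do
        echo "# --- $host ---"
        for cmd in \
            'pkg install -y vm-bhyve bhyve-firmware' \
            'sysrc vm_enable=YES' \
            'sysrc vm_dir=zfs:zroot/bhyve' \
            'zfs create zroot/bhyve' \
            'vm init' \
            'vm switch create public' \
            'vm switch add public re0'
        do
            echo "doas $cmd"  # e.g. feed each list to: ssh "$host" /bin/sh
        done
    done
}
print_setup_plan
```

With only three hosts this is arguably overkill, but it documents the steps in a replayable form.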
## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB   4780 kBps   06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.
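A side note: `vm configure rocky` (used in the next step) simply opens `rocky.conf` in an editor. For a repeatable setup, the same edits could be scripted instead. A hedged sketch, operating on a temp copy of the default config above, with the per-VM `uuid`/`network0_mac` lines left out on purpose since `vm create` generates those:

```sh
#!/bin/sh
# Sketch: apply the config adjustments non-interactively with sed instead
# of editing rocky.conf by hand. A temp file stands in for the real config.
CONF=$(mktemp)
cat <<'END' >"$CONF"
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
END
# Rewrite to a new file and move it back; avoids non-portable `sed -i`.
sed -e 's/^loader=.*/loader="uefi"/' \
    -e 's/^cpu=.*/cpu=4/' \
    -e 's/^memory=.*/memory=14G/' "$CONF" >"$CONF.new" && mv "$CONF.new" "$CONF"
printf 'guest="linux"\nuefi_vars="yes"\n' >>"$CONF"
```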
To make Rocky Linux boot, the configuration needs a few adjustments. And as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```
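`truncate` only changes the apparent size: the image stays sparse, so the extra space is not allocated until the guest actually writes to it. A quick demonstration of this behaviour on a throwaway file (sizes scaled down):

```sh
#!/bin/sh
# Demonstrate that `truncate` creates a sparse file: the apparent size
# grows, but the blocks actually allocated on disk stay (almost) zero.
IMG=$(mktemp)
truncate -s 10M "$IMG"
apparent=$(wc -c <"$IMG")            # logical size in bytes
allocated=$(du -k "$IMG" | cut -f1)  # kilobytes really used on disk
echo "apparent: $apparent bytes, allocated: $allocated KiB"
```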
### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and on the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VM when a server boots, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```
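One caveat with the `tee -a` approach: re-running it appends the lines again. A guarded sketch that only appends when `vm_list` is not set yet, shown against a temp file standing in for `/etc/rc.conf` (on the real hosts the write would go through `doas tee -a`):

```sh
#!/bin/sh
# Append the vm-bhyve settings only if they are not present yet, so the
# step can be re-run safely. RC stands in for /etc/rc.conf here.
RC=$(mktemp)
append_once() {
    grep -q '^vm_list=' "$RC" || cat <<'END' >>"$RC"
vm_list="rocky"
vm_delay="5"
END
}
append_once
append_once  # second run is a no-op, no duplicate lines
```

On FreeBSD, `sysrc vm_list=rocky` would achieve the same idempotency out of the box.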
### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs accordingly on the VMs themselves, by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to allow only SSH key authentication from now on.
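All of the per-VM commands above follow the same pattern, with only the index changing (`r0` → `192.168.1.120`, `r1` → `.121`, `r2` → `.122`). A sketch that generates each VM's command list for pasting into its console; the interface name and domain are the ones from this post:

```sh
#!/bin/sh
# Generate the nmcli/hostnamectl commands for each VM from the addressing
# pattern used in this post (r0..r2 -> 192.168.1.120..122, gateway .1).
gen_commands() {
    i=$1
    ip="192.168.1.12$i"
    cat <<END
nmcli connection modify enp0s5 ipv4.address $ip/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
hostnamectl set-hostname r$i.lan.buetow.org
END
}
for i in 0 1 2; do
    echo "# r$i:"
    gen_commands "$i"
done
```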
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!
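To put a rough number on the gap (with the caveat that single `ns/op` readings like these are noisy), the relative slowdown of the `BenchmarkCPUSilly1` results against the FreeBSD host run can be computed like this:

```sh
#!/bin/sh
# Relative slowdown vs. the FreeBSD host result (0.4022 ns/op), using the
# numbers measured above. Pure arithmetic, done with awk.
host=0.4022
slow() { awk -v a="$1" -v b="$host" 'BEGIN { printf "%.1f%%\n", (a - b) / b * 100 }'; }
echo "Rocky Linux VM: $(slow 0.4347)"  # roughly 8% slower than the host
echo "FreeBSD VM:     $(slow 0.4273)"  # roughly 6% slower than the host
```

So both guests pay a single-digit-percent penalty in this micro-benchmark, and the Linux and FreeBSD guests are within a couple of percent of each other.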
Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
language: markdown | contentVersion: 197 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:15.972Z --> received didChange | language: markdown | contentVersion: 198 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:16.077Z --> received didChange | language: markdown | contentVersion: 199 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:16.164Z --> received didChange | language: markdown | contentVersion: 200 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:16.204Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:23:16.205Z --> completion hints: | then we configure the VMs to use static IPs. The following commands are executed inside the VM `r0`: + +APP 2025-04-04T19:23:16.205Z --> sent request | {"jsonrpc":"2.0","id":28,"result":{"isIncomplete":false,"items":[{"label":"then we configure the VMs to use static IPs. The following commands are executed inside the VM `r0`:","kind":1,"preselect":true,"detail":"then we configure the VMs to use static IPs. The following commands are executed inside the VM `r0`:","insertText":"then we configure the VMs to use static IPs. 
The following commands are executed inside the VM `r0`:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":216,"character":104},"end":{"line":216,"character":244}}}]}]}} + +APP 2025-04-04T19:23:16.205Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:16.269Z --> received didChange | language: markdown | contentVersion: 201 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:16.314Z --> received didChange | language: markdown | contentVersion: 202 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:16.465Z --> received didChange | language: markdown | contentVersion: 203 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:16.470Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":14,"line":216},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":29} + +APP 2025-04-04T19:23:16.672Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

However, the configuration needs a few adjustments to make Rocky Linux boot. Furthermore, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
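As an aside on the `truncate` step above: growing the image this way produces a sparse file, so the extra space is not actually allocated on the ZFS dataset until the guest writes to it. A quick, self-contained way to see the difference between apparent and allocated size (a sketch using a throwaway file, not a real VM image):

```sh
# Grow a throwaway image sparsely to 100G, like the disk0.img step above
truncate -s 100G demo.img

# Apparent file size vs. blocks actually allocated on disk
ls -lh demo.img   # apparent size: 100G
du -h demo.img    # allocated size: (nearly) nothing yet

rm demo.img
```

This is also why enlarging the image is cheap and instant; the guest only sees a bigger virtual disk and still has to partition the new space itself.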
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add the corresponding entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and
configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The following commands are executed inside the VM `r0` (the same for `r1` and `r2`, but with different IPs):","insertText":"the VMs to use static IPs. The following commands are executed inside the VM `r0` (the same for `r1` and `r2`, but with different IPs):","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":216,"character":149},"end":{"line":216,"character":289}}}]}]}} + +APP 2025-04-04T19:23:17.126Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:17.971Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":4,"line":216},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":30} + +APP 2025-04-04T19:23:18.070Z --> received didChange | language: markdown | contentVersion: 204 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:18.173Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd wconfigure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and 
configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":204} + +APP 2025-04-04T19:23:18.173Z --> skipping because content is stale + +APP 2025-04-04T19:23:18.173Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:18.173Z --> sent request | {"jsonrpc":"2.0","id":30,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:23:18.233Z --> received didChange | language: markdown | contentVersion: 205 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:18.312Z --> received didChange | language: markdown | contentVersion: 206 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:18.318Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":7,"line":216},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":31} + +APP 2025-04-04T19:23:18.519Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth 
blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

The configuration needs some adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs of 4 CPU cores and 14 GB of RAM).
So we ran `doas vm configure rocky` and modified it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
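Since each host exposes its single VM on the default VNC port, the endpoints are easy to enumerate. A trivial sketch (not part of the original setup; `f0`..`f2` are my host names):

```shell
# Print the VNC endpoint of the one VM running on each FreeBSD host.
for host in f0 f1 f2; do
    printf 'vnc://%s:5900\n' "$host"
done
```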
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. It may become useful later when there are more. After adding it, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we switch the network configuration of the VMs from DHCP to static addresses.
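The same handful of `nmcli` commands has to be repeated on each VM with a different address. A small generator sketch of my own (not part of the original setup; it assumes the interface name `enp0s5` and the `192.168.1.120`-`122` address plan used on my VMs) prints the per-VM commands so they can be pasted into each root shell:

```shell
# Generate the static-IP nmcli commands for each Rocky VM (r0..r2).
for i in 0 1 2; do
    ip="192.168.1.12$i"
    echo "# r$i -> $ip"
    echo "nmcli connection modify enp0s5 ipv4.addresses $ip/24"
    echo "nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1"
    echo "nmcli connection modify enp0s5 ipv4.dns 192.168.1.1"
    echo "nmcli connection modify enp0s5 ipv4.method manual"
done
```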
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, I enabled `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of them anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's even a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB of RAM.
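For clarity, here is the delta between the generated default and the final configuration, in diff-like notation (derived from the two config files shown in this post; unchanged keys omitted):

```
-loader="bhyveload"
+guest="linux"
+loader="uefi"
+uefi_vars="yes"
-cpu=1
+cpu=4
-memory=256M
+memory=14G
+graphics="yes"
+graphics_vga=io
```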
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
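A side note on the `truncate` resize used earlier: it grows the image sparsely, so the file's apparent size jumps to 100G immediately, while blocks are only allocated once the guest actually writes data. A quick demonstration with generic POSIX tools (the file name and size here are just for illustration):

```sh
#!/bin/sh
# truncate(1) creates/extends files sparsely: the apparent size changes
# immediately, but (on most filesystems) almost no blocks are allocated yet.
truncate -s 100M demo.img
wc -c < demo.img   # apparent size in bytes: 104857600
rm demo.img
```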
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly accepted the default settings (auto-partitioning on the 100GB drive plus setting a root password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs; the examples shown are all executed on `f0` (which runs the VM `r0`).

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required: it makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will become useful. After this change, `vm list` shows `Yes` in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
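The static addressing follows one simple numbering scheme (the VMs `r0`-`r2` end in `.120`-`.122`), so the VM `/etc/hosts` entries can even be generated instead of typed. A small, purely illustrative POSIX `sh` sketch:

```sh
#!/bin/sh
# Generate the /etc/hosts entries for the three Rocky VMs r0..r2.
for i in 0 1 2; do
    printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```

Piped through `doas tee -a /etc/hosts`, this yields the same entries that are appended by hand next.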
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add corresponding entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`).
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, I rebooted the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:44.108Z --> received didChange | language: markdown | contentVersion: 217 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:44.114Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":76,"line":240},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":34} + +APP 2025-04-04T19:23:44.316Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, I rebooted the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":217} + +APP 2025-04-04T19:23:44.316Z --> calling completion event + +APP 2025-04-04T19:23:44.316Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":240,"character":0},"end":{"line":241,"character":0}}}] + +APP 2025-04-04T19:23:44.316Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":240,"character":0},"end":{"line":241,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:23:44.317Z --> copilot | completion request + +APP 2025-04-04T19:23:44.317Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:23:44.919Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:23:44.920Z --> completion hints: | root login via SSH. This is not recommended for production systems, but in this case, it is acceptable as the VMs are only accessible from the local network. + +APP 2025-04-04T19:23:44.920Z --> sent request | {"jsonrpc":"2.0","id":34,"result":{"isIncomplete":false,"items":[{"label":"root login via SSH. This is not recommended for production systems, but in this case, it is acceptable as the VMs are only accessible from the local network.","kind":1,"preselect":true,"detail":"root login via SSH. 
This is not recommended for production systems, but in this case, it is acceptable as the VMs are only accessible from the local network.","insertText":"root login via SSH. This is not recommended for production systems, but in this case, it is acceptable as the VMs are only accessible from the local network.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":240,"character":233},"end":{"line":240,"character":318}}}]}]}} + +APP 2025-04-04T19:23:44.921Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:52.628Z --> received didChange | language: markdown | contentVersion: 218 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:53.355Z --> received didChange | language: markdown | contentVersion: 219 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:53.449Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":11,"line":242},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":35} + +APP 2025-04-04T19:23:53.577Z --> received didChange | language: markdown | contentVersion: 220 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:53.652Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post 
about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the default config needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and 
configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":220} + +APP 2025-04-04T19:23:53.652Z --> skipping because content is stale + +APP 2025-04-04T19:23:53.652Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:53.652Z --> sent request | {"jsonrpc":"2.0","id":35,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:23:53.742Z --> received didChange | language: markdown | contentVersion: 221 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:53.848Z --> received didChange | language: markdown | contentVersion: 222 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:53.853Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":14,"line":242},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":36} + +APP 2025-04-04T19:23:53.985Z --> received didChange | language: markdown | contentVersion: 223 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:54.040Z --> received didChange | language: 
markdown | contentVersion: 224 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:54.055Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":224} + +APP 2025-04-04T19:23:54.055Z --> skipping because content is stale + +APP 2025-04-04T19:23:54.055Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:54.055Z --> sent request | {"jsonrpc":"2.0","id":36,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:23:54.163Z --> received didChange | language: markdown | contentVersion: 225 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:54.317Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:23:54.403Z --> received didChange | language: markdown | contentVersion: 226 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:54.559Z --> received didChange | language: markdown | contentVersion: 227 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:54.610Z --> received didChange | language: markdown | contentVersion: 228 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:23:54.862Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":20,"line":242},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":37} + +APP 2025-04-04T19:23:55.064Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration needs some adjustments to boot Rocky Linux. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or even less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
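A note on the `truncate -s 100G disk0.img` step above: this produces a sparse file, so the apparent size jumps to 100G immediately while actual disk usage only grows as the guest writes blocks. Re-running the installer afterwards lets Rocky's auto-partitioning see the full 100G; an already-installed guest would additionally need its partition and filesystem grown inside the VM. A small local demonstration of the sparse behaviour (`demo.img` is a throwaway file name for illustration):

```shell
# truncate(1) creates a sparse file: big apparent size, near-zero allocation.
truncate -s 100M demo.img
ls -lh demo.img   # apparent size: 100M
du -h demo.img    # allocated size: close to zero until data is written
rm demo.img
```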
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static.
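The per-VM network commands applied in this section differ only in the final address octet and the hostname, so they can also be generated per VM index; a hedged sketch (the `enp0s5` interface name and the `192.168.1.12N` addressing are taken from this series, and `ipv4.addresses` is the canonical NetworkManager property name):

```shell
# Print the NetworkManager commands for VM rN (N = 0, 1, 2); run the
# output inside the VM as root. Assumes enp0s5 and 192.168.1.12N.
static_ip_cmds() {
  n=$1
  cat <<END
nmcli connection modify enp0s5 ipv4.addresses 192.168.1.12${n}/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
nmcli connection down enp0s5 && nmcli connection up enp0s5
hostnamectl set-hostname r${n}.lan.buetow.org
END
}
static_ip_cmds 0   # commands for r0
```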
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":228} + +APP 2025-04-04T19:23:55.065Z --> calling completion event + +APP 2025-04-04T19:23:55.065Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":242,"character":0},"end":{"line":243,"character":0}}}] + +APP 2025-04-04T19:23:55.065Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":242,"character":0},"end":{"line":243,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:23:55.065Z --> copilot | completion request + +APP 2025-04-04T19:23:55.066Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:23:55.642Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:23:55.643Z --> completion hints: | the VMs to make sure the network settings are applied correctly. 
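The benchmark conclusions above rest on single `go test -bench=.` runs. Repeating runs (e.g. `-count=10`) and averaging the ns/op column gives more trustworthy figures; tools like benchstat from golang.org/x/perf do this properly with variance estimates. A minimal awk sketch of the averaging step, fed with sample lines mimicking the output format shown above (the piped lines stand in for a saved benchmark log):

```shell
# Average the ns/op column of repeated `go test -bench=.` output.
# The sample lines mimic the benchmark output quoted in the post.
printf '%s\n' \
  'BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op' \
  'BenchmarkCPUSilly1-4 1000000000 0.4345 ns/op' \
  'BenchmarkCPUSilly1-4 1000000000 0.4349 ns/op' |
awk '/ns\/op/ { sum += $3; n++ } END { printf "mean: %.4f ns/op over %d runs\n", sum / n, n }'
# prints: mean: 0.4347 ns/op over 3 runs
```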
+ +APP 2025-04-04T19:23:55.643Z --> sent request | {"jsonrpc":"2.0","id":37,"result":{"isIncomplete":false,"items":[{"label":"the VMs to make sure the network settings are applied correctly.","kind":1,"preselect":true,"detail":"the VMs to make sure the network settings are applied correctly.","insertText":"the VMs to make sure the network settings are applied correctly.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":242,"character":84},"end":{"line":242,"character":192}}}]}]}} + +APP 2025-04-04T19:23:55.643Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:05.069Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:23.227Z --> received didChange | language: markdown | contentVersion: 229 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:23.354Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":14,"line":244},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":38} + +APP 2025-04-04T19:24:23.481Z --> received didChange | language: markdown | contentVersion: 230 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:23.556Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and
configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":230} + +APP 2025-04-04T19:24:23.556Z --> skipping because content is stale + +APP 2025-04-04T19:24:23.556Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:23.557Z --> sent request | {"jsonrpc":"2.0","id":38,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:24:23.657Z --> received didChange | language: markdown | contentVersion: 231 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:23.768Z --> received didChange | language: markdown | contentVersion: 232 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:23.774Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":17,"line":244},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":39} + +APP 2025-04-04T19:24:23.975Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth 
blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and removes a lot of manual overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot (and to apply some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM), the configuration needs to change. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":232} + +APP 2025-04-04T19:24:23.975Z --> calling completion event + +APP 2025-04-04T19:24:23.975Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}}}] + +APP 2025-04-04T19:24:23.976Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:24:23.976Z --> copilot | completion request + +APP 2025-04-04T19:24:23.976Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:24:24.370Z --> received didChange | language: markdown | contentVersion: 233 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:24.473Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":17,"line":244},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":40} + +APP 2025-04-04T19:24:24.674Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","languageId":"markdown","version":233} + +APP 2025-04-04T19:24:24.675Z --> calling completion event + +APP 2025-04-04T19:24:24.675Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}}}] + +APP 2025-04-04T19:24:24.675Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:24:24.675Z --> copilot | completion request + +APP 2025-04-04T19:24:24.675Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:24:24.676Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}}}] + +APP 2025-04-04T19:24:24.676Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:24:24.679Z --> received didChange | language: markdown | contentVersion: 234 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:24.893Z --> received didChange | language: markdown | contentVersion: 235 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:25.165Z --> received didChange | language: markdown | contentVersion: 236 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:25.350Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:24:25.352Z --> completion hints: | can SSH into the VMs. I used `ssh-copy-id` to copy the SSH key from my FreeBSD host to the VMs. This is a bit easier than copying the public key manually. + +APP 2025-04-04T19:24:25.352Z --> sent request | {"jsonrpc":"2.0","id":40,"result":{"isIncomplete":false,"items":[{"label":"can SSH into the VMs. I used `ssh-copy-id` to copy the SSH key from my FreeBSD host to the VMs. This is a bit easier than copying the public key manually.","kind":1,"preselect":true,"detail":"can SSH into the VMs. I used `ssh-copy-id` to copy the SSH key from my FreeBSD host to the VMs. This is a bit easier than copying the public key manually.","insertText":"can SSH into the VMs. I used `ssh-copy-id` to copy the SSH key from my FreeBSD host to the VMs. 
This is a bit easier than copying the public key manually.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":244,"character":171},"end":{"line":244,"character":213}}}]}]}} + +APP 2025-04-04T19:24:25.352Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:25.451Z --> received didChange | language: markdown | contentVersion: 237 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:25.515Z --> received didChange | language: markdown | contentVersion: 238 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:25.522Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":22,"line":244},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":41} + +APP 2025-04-04T19:24:25.723Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool (mounted at `/zroot/bhyve`):

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. And since we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.
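A quick note on the `truncate` step above: it grows the image file sparsely, so the extra space is not actually allocated on the ZFS pool until the guest writes to it. A small illustration on a scratch file (not the real `disk0.img`; the size is just for the demo):

```sh
# Grow a scratch file sparsely to 100MB.
f=$(mktemp)
truncate -s 100M "$f"

# The apparent size is 100MB (104857600 bytes)...
ls -l "$f" | awk '{print $5}'

# ...but barely any blocks are actually allocated on disk.
du -k "$f" | awk '{print $1}'

rm "$f"
```

This is why enlarging the image before the installation is essentially free.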
=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` and its VM `r0`:

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Then we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, we copy my public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.
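This last edit could also be scripted instead of done by hand. The following is a hedged sketch demonstrating the `sed` expression on a scratch copy of an sshd config snippet; on the real VMs, the equivalent would be running the same expression against `/etc/ssh/sshd_config` (e.g. over SSH in a loop) and reloading `sshd` afterwards:

```sh
# Scratch copy standing in for /etc/ssh/sshd_config.
cfg=$(mktemp)
printf '%s\n' 'PermitRootLogin yes' '#PasswordAuthentication yes' > "$cfg"

# Force key-only authentication, whether or not the option was commented out.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"

grep '^PasswordAuthentication' "$cfg"   # PasswordAuthentication no
rm "$cfg"
```

Note that this uses GNU `sed` syntax (fine inside the Rocky VMs); on the FreeBSD hosts, `sed -i` would need an explicit backup suffix argument, e.g. `sed -i ''`.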
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I was curious and didn't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look them up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt.

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?
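To put the measured numbers side by side: a small sketch computing the relative slowdown of each VM against the FreeBSD host, using the `BenchmarkCPUSilly1` ns/op figures from the runs above (the percentages inherit all the caveats of the silly benchmark itself):

```go
package main

import "fmt"

func main() {
	// ns/op figures for BenchmarkCPUSilly1, copied from the runs above.
	host := 0.4022      // FreeBSD host
	linuxVM := 0.4347   // Rocky Linux Bhyve VM
	freebsdVM := 0.4273 // FreeBSD Bhyve VM

	slowdown := func(vm float64) float64 { return (vm - host) / host * 100 }
	fmt.Printf("Rocky Linux VM vs host: +%.1f%%\n", slowdown(linuxVM))
	fmt.Printf("FreeBSD VM vs host:     +%.1f%%\n", slowdown(freebsdVM))
}
```

So both guests stay within roughly 8% of bare metal in this (unscientific) test.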
This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":239} + +APP 2025-04-04T19:24:26.512Z --> calling completion event + +APP 2025-04-04T19:24:26.512Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}}}] + +APP 2025-04-04T19:24:26.512Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":244,"character":0},"end":{"line":245,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:24:26.512Z --> copilot | completion request + +APP 2025-04-04T19:24:26.513Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:24:26.542Z --> received didChange | language: markdown | contentVersion: 240 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:26.644Z --> received didChange | language: markdown | contentVersion: 241 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:26.757Z --> received didChange | language: markdown | contentVersion: 242 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:26.955Z --> received didChange | 
language: markdown | contentVersion: 243 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:26.960Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":26,"line":244},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":43} + +APP 2025-04-04T19:24:27.078Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:24:27.078Z --> completion hints: | the SSH keys from the FreeBSD host to the VMs. This is done by running the following command on the FreeBSD host: + +APP 2025-04-04T19:24:27.079Z --> sent request | {"jsonrpc":"2.0","id":42,"result":{"isIncomplete":false,"items":[{"label":"the SSH keys from the FreeBSD host to the VMs. This is done by running the following command on the FreeBSD host:","kind":1,"preselect":true,"detail":"the SSH keys from the FreeBSD host to the VMs. This is done by running the following command on the FreeBSD host:","insertText":"the SSH keys from the FreeBSD host to the VMs. 
This is done by running the following command on the FreeBSD host:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":244,"character":135},"end":{"line":244,"character":174}}}]}]}} + +APP 2025-04-04T19:24:27.079Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:27.162Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization with Bhyve benefits from POPCNT because guest operating systems use the instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Rocky Linux 9 is a good fit for a VM OS primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900                *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

Note that `truncate` only grows the image as a sparse file, so the additional space is not allocated on disk right away.

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
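Since the hosts are reachable over SSH anyway, a tiny loop can at least show the state of all three VMs at once. This is a hypothetical convenience sketch (not part of the original setup), assuming working SSH access to `f0`, `f1`, and `f2`:

```shell
# Print the vm-bhyve status of each f3s host (assumes SSH access is set up).
for h in f0 f1 f2; do
    printf '== %s ==\n' "$h"
    ssh -o ConnectTimeout=3 "$h" doas vm list || echo "(unreachable)"
done
```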
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed on `f0` (and `r0`, the VM running on `f0`):

### VM auto-start after host reboot

To start the VM automatically when a server boots, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

`vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, which only matters once there is more than one VM per host; maybe it will be useful later. After adding these lines, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we run `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, we copy the public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM on Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many resources anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The SSH key was created in the previous post of this series.","kind":1,"preselect":true,"detail":"SSH key from the FreeBSD host to the VMs. I am using `ssh-copy-id` for this, which is a bit easier than copying the public key manually. The SSH key was created in the previous post of this series.","insertText":"SSH key from the FreeBSD host to the VMs. I am using `ssh-copy-id` for this, which is a bit easier than copying the public key manually. The SSH key was created in the previous post of this series.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":244,"character":223},"end":{"line":244,"character":262}}}]}]}} + +APP 2025-04-04T19:24:27.883Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:28.798Z --> received didChange | language: markdown | contentVersion: 244 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:29.012Z --> received didChange | language: markdown | contentVersion: 245 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:29.295Z --> received didChange | language: markdown | contentVersion: 246 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:29.673Z --> received didChange | language: markdown | contentVersion: 247 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:30.065Z --> received didChange | language: markdown | contentVersion: 248 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 
2025-04-04T19:24:30.663Z --> received didChange | language: markdown | contentVersion: 249 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:24:33.977Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:34.677Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:34.677Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:35.724Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:36.513Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:24:37.163Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:25:32.134Z --> received didChange | language: markdown | contentVersion: 250 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:21.049Z --> 
received didChange | language: markdown | contentVersion: 251 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:21.650Z --> received didChange | language: markdown | contentVersion: 252 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:22.211Z --> received didChange | language: markdown | contentVersion: 253 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:22.796Z --> received didChange | language: markdown | contentVersion: 254 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:22.802Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":1,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":44} + +APP 2025-04-04T19:27:23.004Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But to make Rocky Linux boot, the configuration needs a few changes (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM).
So we run `doas vm configure rocky` and modify the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
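A quick aside on the `truncate` step from above: it grows the disk image as a sparse file, so the additional space is not actually allocated on the ZFS pool until the guest writes to it, which is why enlarging the image is instant. A small, self-contained illustration (using a throwaway file rather than the real `disk0.img`):

```sh
# Create a 1G sparse file; the apparent size and the allocated size differ
truncate -s 1G /tmp/sparse-demo.img
ls -lh /tmp/sparse-demo.img  # apparent size: 1.0G
du -h /tmp/sparse-demo.img   # allocated blocks: (close to) zero, nothing written yet
rm /tmp/sparse-demo.img
```

The guest still has to be told about the extra space; re-running the installer on the enlarged image, as done here, lets its auto-partitioning use the full 100G.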
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will become useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
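The addressing plan is simple: the VMs (`.120` to `.122`) share the `192.168.1.0/24` network with the FreeBSD hosts (`.130` to `.132`) and the router (`.1`). A trivial plain-shell sanity check of that plan (purely illustrative; the addresses are the ones used in this post):

```sh
# Verify that every planned VM address shares the /24 prefix with the gateway
gateway=192.168.1.1
for ip in 192.168.1.120 192.168.1.121 192.168.1.122; do
  # ${ip%.*} strips the last octet, leaving the /24 network prefix
  if [ "${ip%.*}" = "${gateway%.*}" ]; then
    echo "$ip: same /24 as gateway $gateway"
  else
    echo "$ip: not in the gateway's /24" >&2
  fi
done
```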
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add the corresponding entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n##\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":255} + +APP 2025-04-04T19:27:23.380Z --> calling completion event + +APP 2025-04-04T19:27:23.381Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}}}] + +APP 2025-04-04T19:27:23.381Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:27:23.381Z --> copilot | completion request + +APP 2025-04-04T19:27:23.381Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:27:23.382Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}}}] + +APP 2025-04-04T19:27:23.382Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:27:23.437Z --> received didChange | language: markdown | contentVersion: 256 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:23.443Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":3,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":46} + +APP 2025-04-04T19:27:23.645Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This still needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
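If one did want to automate the installs, the Rocky installer (Anaconda) supports kickstart files passed via the `inst.ks=` kernel boot option. A minimal sketch with hypothetical values, not the setup actually used here:

```
# Minimal kickstart sketch (hypothetical values, not used in this setup)
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme   # use --iscrypted with a real hash instead
autopart                      # automatic partitioning, as chosen manually below
reboot
```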
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` and the VM `r0` running on it:

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's even a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:23.815Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":4,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":47} + +APP 2025-04-04T19:27:23.975Z --> received didChange | language: markdown | contentVersion: 258 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:24.016Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n### F\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":258} + +APP 2025-04-04T19:27:24.017Z --> skipping because content is stale + +APP 2025-04-04T19:27:24.017Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:24.017Z --> sent request | {"jsonrpc":"2.0","id":47,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:27:24.164Z --> received didChange | language: markdown | contentVersion: 259 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:24.200Z --> received didChange | language: markdown | contentVersion: 260 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:24.215Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:27:24.215Z --> completion hints: | +```sh + +APP 2025-04-04T19:27:24.215Z --> sent request | {"jsonrpc":"2.0","id":46,"result":{"isIncomplete":false,"items":[{"label":"```sh","kind":1,"preselect":true,"detail":"```sh","insertText":"```sh","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":333,"character":8},"end":{"line":333,"character":8}}}]}]}} + +APP 2025-04-04T19:27:24.216Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:24.371Z --> received didChange | language: markdown | contentVersion: 261 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:24.623Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":8,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":48} + +APP 2025-04-04T19:27:24.824Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs to be changed to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n### Free\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":261} + +APP 2025-04-04T19:27:24.824Z --> calling completion event + +APP 2025-04-04T19:27:24.824Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}}}] + +APP 2025-04-04T19:27:24.825Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:27:24.825Z --> copilot | completion request + +APP 2025-04-04T19:27:24.825Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:27:25.137Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:27:25.137Z --> completion hints: | BSD benchmark + +APP 2025-04-04T19:27:25.138Z --> sent request | {"jsonrpc":"2.0","id":48,"result":{"isIncomplete":false,"items":[{"label":"BSD benchmark","kind":1,"preselect":true,"detail":"BSD benchmark","insertText":"BSD benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":333,"character":21},"end":{"line":333,"character":21}}}]}]}} + +APP 2025-04-04T19:27:25.138Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:25.762Z --> received didChange | language: markdown | contentVersion: 262 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:25.830Z --> received didChange | language: markdown | contentVersion: 263 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:26.081Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":10,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":49} + +APP 2025-04-04T19:27:26.178Z --> received didChange | language: markdown | contentVersion: 264 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:26.184Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":11,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":50} + +APP 2025-04-04T19:27:26.385Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs to be changed to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n### FreeBS \n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series, covering self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its main strength is minimal overhead, which allows virtual machines to run at near-native performance while leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary word. POPCNT support matters for Bhyve because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M    453G    1.74M    /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs exist yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This default needs a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host; maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
markdown | contentVersion: 272 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:30.072Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n### FreeBSD @ B\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":272} + +APP 2025-04-04T19:27:30.072Z --> skipping because content is stale + +APP 2025-04-04T19:27:30.072Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:30.072Z --> sent request | {"jsonrpc":"2.0","id":52,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:27:30.351Z --> received didChange | language: markdown | contentVersion: 273 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:30.603Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":16,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":53} + +APP 2025-04-04T19:27:30.804Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n### FreeBSD @ Bh\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be sure, this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":273} + +APP 2025-04-04T19:27:30.804Z --> calling completion event + +APP 2025-04-04T19:27:30.804Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}}}] + +APP 2025-04-04T19:27:30.805Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":333,"character":0},"end":{"line":334,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:27:30.805Z --> copilot | completion request + +APP 2025-04-04T19:27:30.805Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:27:30.924Z --> received didChange | language: markdown | contentVersion: 274 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.015Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:27:31.016Z --> completion hints: | yve benchmark + +APP 2025-04-04T19:27:31.016Z --> sent request | {"jsonrpc":"2.0","id":53,"result":{"isIncomplete":false,"items":[{"label":"yve benchmark","kind":1,"preselect":true,"detail":"yve benchmark","insertText":"yve 
benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":333,"character":29},"end":{"line":333,"character":29}}}]}]}} + +APP 2025-04-04T19:27:31.016Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:31.017Z --> received didChange | language: markdown | contentVersion: 275 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.071Z --> received didChange | language: markdown | contentVersion: 276 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.180Z --> received didChange | language: markdown | contentVersion: 277 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.185Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":20,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":54} + +APP 2025-04-04T19:27:31.386Z --> received didChange | language: markdown | contentVersion: 278 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.387Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n### FreeBSD @ Bhyve b\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 
1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be sure, this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":278} + +APP 2025-04-04T19:27:31.387Z --> skipping because content is stale + +APP 2025-04-04T19:27:31.387Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:31.387Z --> sent request | {"jsonrpc":"2.0","id":54,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:27:31.455Z --> received didChange | language: markdown | contentVersion: 279 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.609Z --> received didChange | language: markdown | contentVersion: 280 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.743Z --> received didChange | language: markdown | contentVersion: 281 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:31.851Z --> received didChange | language: markdown | contentVersion: 282 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:32.102Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":25,"line":333},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":55} + +APP 2025-04-04T19:27:32.112Z --> received didChange | language: markdown | contentVersion: 283 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:32.155Z --> received didChange | language: markdown | contentVersion: 284 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:32.302Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration won't boot Rocky Linux, though, so it needs some adjustments. Furthermore, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 
1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":284} + +APP 2025-04-04T19:27:32.303Z --> skipping because content is stale + +APP 2025-04-04T19:27:32.303Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:32.303Z --> sent request | {"jsonrpc":"2.0","id":55,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:27:32.314Z --> received didChange | language: markdown | contentVersion: 285 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:32.408Z --> received didChange | language: markdown | contentVersion: 286 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:33.005Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:33.382Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:33.382Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:33.646Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:33.646Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:34.825Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:36.386Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:37.402Z --> received didChange | language: markdown | contentVersion: 287 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:40.805Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:27:49.819Z --> received didChange | language: markdown | contentVersion: 288 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:50.426Z --> received didChange | language: markdown | 
contentVersion: 289 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:51.633Z --> received didChange | language: markdown | contentVersion: 290 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:52.559Z --> received didChange | language: markdown | contentVersion: 291 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:52.961Z --> received didChange | language: markdown | contentVersion: 292 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:57.850Z --> received didChange | language: markdown | contentVersion: 293 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:58.023Z --> received didChange | language: markdown | contentVersion: 294 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:58.172Z --> received didChange | language: markdown | contentVersion: 295 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:58.686Z --> received didChange | language: markdown | contentVersion: 296 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:59.215Z --> received didChange | language: markdown | contentVersion: 297 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:59.221Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":1,"line":366},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":56} + +APP 2025-04-04T19:27:59.373Z --> received didChange | language: markdown | contentVersion: 298 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:27:59.378Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":366},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":57} + +APP 2025-04-04T19:27:59.580Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration won't boot Rocky Linux, though, so it needs some adjustments. Furthermore, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and set `PasswordAuthentication no` so that from now on only SSH key authentication is allowed.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// Benchmarks live in a _test.go file so they run via "go test -bench".
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one, and the Go version is also a bit older. I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD may just be slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's even a bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility keeps options open and allows managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":299} + +APP 2025-04-04T19:27:59.841Z --> calling completion event + +APP 2025-04-04T19:27:59.841Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":366,"character":0},"end":{"line":367,"character":0}}}] + +APP 2025-04-04T19:27:59.841Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":366,"character":0},"end":{"line":367,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:27:59.841Z --> copilot | completion request + +APP 2025-04-04T19:27:59.841Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:27:59.842Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":366,"character":0},"end":{"line":367,"character":0}}}] + +APP 2025-04-04T19:27:59.842Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":366,"character":0},"end":{"line":367,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:28:00.352Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 
2025-04-04T19:28:00.353Z --> completion hints: | Conclusion + +APP 2025-04-04T19:28:00.353Z --> sent request | {"jsonrpc":"2.0","id":58,"result":{"isIncomplete":false,"items":[{"label":"Conclusion","kind":1,"preselect":true,"detail":"Conclusion","insertText":"Conclusion","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":366,"character":13},"end":{"line":366,"character":13}}}]}]}} + +APP 2025-04-04T19:28:00.353Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:28:09.581Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:28:09.843Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:28:09.843Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:03.961Z --> received didChange | language: markdown | contentVersion: 300 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:04.746Z --> received didChange | language: markdown | contentVersion: 301 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:05.587Z --> received didChange | language: markdown | contentVersion: 302 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:05.698Z --> received didChange | language: markdown | contentVersion: 303 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:05.809Z --> received didChange | language: markdown | contentVersion: 304 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:05.928Z --> received didChange | language: markdown | contentVersion: 305 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.050Z --> received didChange | language: markdown | contentVersion: 306 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.292Z --> received didChange | language: markdown | contentVersion: 307 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.343Z --> received didChange | language: markdown | contentVersion: 308 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.501Z --> received didChange | language: markdown | contentVersion: 309 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.588Z --> received didChange | language: markdown | contentVersion: 310 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.768Z --> received didChange | language: markdown | contentVersion: 311 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.799Z --> received didChange | language: markdown | contentVersion: 312 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.852Z --> received didChange | language: markdown | contentVersion: 313 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:06.998Z --> received didChange | language: markdown | contentVersion: 314 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:07.003Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":16,"line":366},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":59} + +APP 2025-04-04T19:32:07.145Z --> received didChange | language: markdown | contentVersion: 315 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:07.173Z --> received didChange | language: markdown | contentVersion: 316 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:07.205Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga="io"
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
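A quick aside on the `truncate` step above: it only changes the apparent size of `disk0.img`. The file stays sparse, so the enlarged image doesn't actually consume 100G on the ZFS dataset until the guest writes data into it. This can be observed with any file, no Bhyve involved (a small demo, using `/tmp` via `mktemp`):

```sh
# Demonstrate that truncate(1) grows a file sparsely: the apparent size
# jumps to 100 MB, but the number of allocated blocks stays at (or near)
# zero until something writes real data into the file.
img=$(mktemp)
truncate -s 100M "$img"

apparent=$(wc -c < "$img")              # apparent size in bytes
allocated_kb=$(du -k "$img" | cut -f1)  # actually allocated space in KB

echo "apparent=$apparent allocated_kb=$allocated_kb"
rm -f "$img"
```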
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all three VMs. The examples are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes vm-bhyve wait 5 seconds before starting each VM, and there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, the `AUTO` column now shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":316} + +APP 2025-04-04T19:32:07.205Z --> skipping because content is stale + +APP 2025-04-04T19:32:07.205Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:07.205Z --> sent request | {"jsonrpc":"2.0","id":59,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:07.402Z --> received didChange | language: markdown | contentVersion: 317 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:07.488Z --> received didChange | language: markdown | contentVersion: 318 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:07.494Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":20,"line":366},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":60} + +APP 2025-04-04T19:32:07.695Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth 
blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking wih \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":318} + +APP 2025-04-04T19:32:07.696Z --> calling completion event + +APP 2025-04-04T19:32:07.696Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":366,"character":0},"end":{"line":367,"character":0}}}] + +APP 2025-04-04T19:32:07.696Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":366,"character":0},"end":{"line":367,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:07.696Z --> copilot | completion request + +APP 2025-04-04T19:32:07.697Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:07.888Z --> received didChange | language: markdown | contentVersion: 319 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:08.050Z --> received didChange | language: markdown | contentVersion: 320 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:08.176Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:08.176Z --> completion hints: | `sysbench` + +APP 2025-04-04T19:32:08.177Z --> sent request | 
{"jsonrpc":"2.0","id":60,"result":{"isIncomplete":false,"items":[{"label":"`sysbench`","kind":1,"preselect":true,"detail":"`sysbench`","insertText":"`sysbench`","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":366,"character":30},"end":{"line":366,"character":30}}}]}]}} + +APP 2025-04-04T19:32:08.177Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:08.287Z --> received didChange | language: markdown | contentVersion: 321 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:08.385Z --> received didChange | language: markdown | contentVersion: 322 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:08.475Z --> received didChange | language: markdown | contentVersion: 323 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:08.480Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":21,"line":366},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":61} + +APP 2025-04-04T19:32:08.681Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, and because we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs (4 CPU cores and 14 GB of RAM). So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
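A side note on the `truncate` step above: enlarging the image this way is instantaneous because the file grows sparsely, i.e. no data blocks are allocated until the guest actually writes to them. A quick illustration on a throwaway temporary file (not the real `disk0.img`):

```shell
# Sketch: `truncate -s` grows a file sparsely, so enlarging it is instant.
img=$(mktemp)
truncate -s 100M "$img"        # apparent size becomes 100 MiB immediately
apparent=$(wc -c < "$img")     # 104857600 bytes
echo "apparent size: $apparent bytes"
rm -f "$img"
```

Comparing `ls -lh` (apparent size) with `du -h` (allocated blocks) on the image file shows the same effect for `disk0.img`.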
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower (roughly 8% in this run) than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
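As an aside on what `POPCNT` actually computes: it simply counts the one-bits of a value. A toy shell sketch of the same computation in userland (not using the CPU instruction itself):

```sh
# Count the set bits of 255 (binary 11111111) the slow way.
# POPCNT on the value 255 would likewise yield 8.
n=255
count=0
while [ "$n" -gt 0 ]; do
    count=$((count + n % 2))
    n=$((n / 2))
done
echo "$count" # prints 8
```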
We also install the required package to make Bhyve work with the UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few changes. Additionally, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
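A caveat worth noting on the disk enlargement: `truncate` only grows the backing image. Because the installer was re-run afterwards, its auto-partitioning already used the full 100G here. If an image is enlarged later, after installation, the partition, LVM volume, and filesystem inside the guest must be grown as well. A hedged sketch for a default Rocky Linux 9 layout, where the device name `/dev/vda`, the partition number `3`, and the `rl` volume group are assumptions to verify with `lsblk` first:

```sh
# Assumed names: virtio disk /dev/vda, LVM partition 3, volume group "rl".
# Check with lsblk before running any of this.
dnf install -y cloud-utils-growpart
growpart /dev/vda 3                # grow partition 3 to fill the enlarged disk
pvresize /dev/vda3                 # make LVM aware of the new space
lvextend -l +100%FREE /dev/rl/root # extend the root logical volume
xfs_growfs /                       # grow the XFS root filesystem online
```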
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all three VMs. The examples below are all executed on host `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required; it makes the system wait 5 seconds before starting each VM, and there is currently only one VM per host. Maybe later, when there are more, this will be useful. After this change, a `Yes` indicator appears in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
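The `/etc/hosts` entries for the three VMs used in this section follow a simple pattern (IP `192.168.1.12X` for VM `rX`), so they can also be generated rather than typed; a small sketch:

```sh
# Print the hosts entries for r0..r2; pipe into 'doas tee -a /etc/hosts' to apply
for i in 0 1 2; do
    printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```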
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via the VNC console and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I was curious and didn't want to compare apples with oranges, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
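As a footnote on the benchmark numbers above: the relative difference is easy to quantify. Taking the `BenchmarkCPUSilly1` results (0.4022 ns/op on the FreeBSD host versus 0.4347 ns/op in the Rocky Linux VM), a quick awk one-liner gives the VM's relative slowdown:

```sh
# Relative slowdown of the Linux VM vs. the FreeBSD host, in percent
awk 'BEGIN { printf "%.1f%%\n", (0.4347 - 0.4022) / 0.4022 * 100 }' # prints 8.1%
```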
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its main strength is its minimal overhead, which allows virtual machines to achieve near-native performance while leveraging FreeBSD's capabilities for storage and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that provide virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for Bhyve because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Furthermore, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (or in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs (shown here for `r0`):

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:22.511Z --> failed to parse line: | failed to parse | Content-Length: 15119
+
+ + +APP 2025-04-04T19:32:22.512Z --> received didChange | language: markdown | contentVersion: 346 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:22.519Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:22.520Z --> completion hints: | FreeBSD + +APP 2025-04-04T19:32:22.520Z --> sent request | {"jsonrpc":"2.0","id":68,"result":{"isIncomplete":false,"items":[{"label":"FreeBSD","kind":1,"preselect":true,"detail":"FreeBSD","insertText":"FreeBSD","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":11},"end":{"line":368,"character":11}}}]}]}} + +APP 2025-04-04T19:32:22.520Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:22.563Z --> received didChange | language: markdown | contentVersion: 347 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:22.714Z --> received didChange | language: markdown | contentVersion: 348 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:22.950Z --> received didChange | language: markdown | contentVersion: 349 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:23.758Z --> received didChange | language: markdown | contentVersion: 350 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:24.017Z --> received didChange | language: markdown | contentVersion: 351 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series, covering self-hosting demands in a home lab. f3s?
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD. Its minimal overhead allows virtual machines to achieve near-native performance, and it leverages the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for Bhyve because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass the capability through to the virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead. We also install `bhyve-firmware`, the package required to boot VMs with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM (by running `reboot` inside it) to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no`, so that only SSH key authentication is allowed from now on.

### Install the latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (around 8% in this run). The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and for managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\n### FreeBSD host\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":358} + +APP 2025-04-04T19:32:25.156Z --> skipping because content is stale + +APP 2025-04-04T19:32:25.156Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:25.156Z --> sent request | {"jsonrpc":"2.0","id":70,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:25.391Z --> received didChange | language: markdown | contentVersion: 359 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:25.529Z --> received didChange | language: markdown | contentVersion: 360 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:25.802Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:26.058Z --> received didChange | language: markdown | contentVersion: 361 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:26.063Z --> received request: | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows virtual machines to achieve near-native performance, and it leverages the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the `bhyve-firmware` package to make Bhyve work with UEFI guests.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, we have to adjust this configuration. We also give the VMs beefy specs (4 CPU cores and 14G of RAM), as we intend to run the majority of the workload in the k3s cluster on these Linux VMs.
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga="io"
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be automated to run unattended, but there are only three VMs to install, and we do this only once a year or even less often, so the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but with just three VMs it wasn't worth the effort.
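If the installs ever do need scripting, a first step could be a small loop over the hosts. This is only a hypothetical dry-run sketch (it prints the commands it would run over SSH instead of executing them):

```sh
# Dry run: print the install command for each FreeBSD host (f0..f2).
for host in f0 f1 f2; do
    echo "ssh $host doas vm install rocky Rocky-9.5-x86_64-minimal.iso"
done
```

Dropping the `echo` would actually run the installs, one host after the other.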
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below were all executed for the VM `r0` running on host `f0`:

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required; it makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more VMs, this will be useful. After this addition, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
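As the VMs follow a simple numbering scheme (`r0` to `r2` at `192.168.1.120` to `.122`), the `/etc/hosts` entries used in the next step can also be generated rather than typed out. A small sketch (printing to stdout instead of appending to `/etc/hosts`):

```sh
# Generate the /etc/hosts lines for the three Rocky VMs.
for i in 0 1 2; do
    echo "192.168.1.12$i r$i r$i.lan r$i.lan.buetow.org"
done
```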
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// Note: Go benchmark functions must live in a file ending in _test.go.
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark running in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\n### FreeBSD host\n\n###\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":366} + +APP 2025-04-04T19:32:28.555Z --> calling completion event + +APP 2025-04-04T19:32:28.555Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}}}] + +APP 2025-04-04T19:32:28.555Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:28.555Z --> copilot | completion request + +APP 2025-04-04T19:32:28.555Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:28.672Z --> received didChange | language: markdown | contentVersion: 367 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:28.678Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":4,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":74} + +APP 2025-04-04T19:32:28.812Z --> received didChange | language: markdown | contentVersion: 368 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows virtual machines to achieve near-native performance.
It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Guest operating systems use this instruction to perform various tasks more efficiently; if the host CPU supports POPCNT, Bhyve can pass this capability through to its virtual machines for better performance. Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
We also install the firmware package required to boot Bhyve guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso   1808 MB   4780 kBps   06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The freshly created VM comes with this default configuration:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs a few adjustments to make Rocky Linux boot. We also give the VMs beefy specs (4 CPU cores and 14 GB of RAM each), as we intend to run the majority of the workload in the k3s cluster on these Linux VMs.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root   bhyve   6079   8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as we do this at most once a year, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME ships with a simple one) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but with just three VMs it wasn't worth the effort.
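That said, the configuration tweak itself is easy to script. Here is a hedged sketch (not part of the original setup) of applying the changes from the "VM configuration" section non-interactively with `sed` instead of the interactive `vm configure` editor; it is demonstrated on a scratch copy, and the real file path (`/zroot/bhyve/rocky/rocky.conf`) is assumed from vm-bhyve's datastore layout:

```shell
#!/bin/sh
# Sketch: rewrite a rocky.conf non-interactively. Demonstrated on a
# temporary copy; on a real host the file would live at
# /zroot/bhyve/rocky/rocky.conf (run via doas there).
conf=$(mktemp)
cat <<'END' >"$conf"
loader="bhyveload"
cpu=1
memory=256M
END

# Switch to the UEFI loader and bump the resources, then append the
# settings that are not present in the default config.
sed -e 's/^loader=.*/loader="uefi"/' \
    -e 's/^cpu=.*/cpu=4/' \
    -e 's/^memory=.*/memory=14G/' "$conf" >"$conf.new"
mv "$conf.new" "$conf"
printf 'guest="linux"\nuefi_vars="yes"\n' >>"$conf"

updated=$(cat "$conf")
echo "$updated"
rm -f "$conf"
```

Running the same script on all three hosts would keep the configs consistent, with only `uuid` and `network0_mac` left to vm-bhyve.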
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (automatic partitioning of the 100G drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed for `r0`, the VM running on host `f0`:

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required; it makes vm-bhyve wait 5 seconds before starting each VM, but there is currently only one VM per host. It may become useful later when there are more. After adding these lines, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static IPs.
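The addressing follows a simple pattern: the Rocky VMs get 192.168.1.120-122, mirroring their FreeBSD hosts at 192.168.1.130-132. As a throwaway illustration (not part of the original setup), the `/etc/hosts` entries for the VMs can be generated like this:

```shell
# Print the /etc/hosts entries for the three Rocky VMs. The 192.168.1.12x
# addresses and the r0..r2 naming scheme are the ones used in this series;
# adjust them to your own network.
entries=$(for i in 0 1 2; do
  printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done)
echo "$entries"
```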
As per the previous post of this series, the 3 FreeBSD hosts are already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add corresponding entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the static IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:29.670Z --> received didChange | language: markdown | contentVersion: 374 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:29.825Z --> received didChange | language: markdown | contentVersion: 375 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:29.832Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":12,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":75} + +APP 2025-04-04T19:32:30.033Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool, mounted at `/zroot/bhyve`:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
 https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, we need to change this configuration. We also make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM.

So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
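A side note on the `truncate` step above: it merely extends the image file's apparent size as a sparse file, so the extra space consumes no pool space until the guest actually writes data. A quick self-contained sketch demonstrating this on a scratch file (not the real disk image; the 100M size is just for illustration):

```sh
# Create a scratch file and extend it sparsely, then compare the apparent
# size (ls) with the blocks actually allocated on disk (du).
f=$(mktemp)
truncate -s 100M "$f"
ls -l "$f" | awk '{print "apparent bytes:", $5}'   # apparent bytes: 104857600
du -k "$f" | awk '{print "allocated KiB:", $1}'    # typically 0, as nothing was written
rm "$f"
```

The same should hold for `disk0.img`: `zfs list` only shows real usage growing as the guest fills its filesystem.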
For reference, the three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes vm-bhyve wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static IPs. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

Finally, we install the latest updates and reboot the VMs:

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:30.778Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:30.912Z --> received didChange | language: markdown | contentVersion: 376 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.098Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:31.102Z --> received didChange | language: markdown | contentVersion: 377 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.323Z --> received didChange | language: markdown | contentVersion: 378 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.351Z --> received didChange | language: markdown | contentVersion: 379 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.393Z --> received didChange | language: markdown | contentVersion: 380 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.456Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:31.456Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:31.552Z --> received didChange | language: markdown | contentVersion: 381 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.557Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":18,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":76} + +APP 2025-04-04T19:32:31.735Z --> received didChange | language: markdown | contentVersion: 382 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.759Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\n### FreeBSD host\n\n### FreeBSD Bhyve g\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":382} + +APP 2025-04-04T19:32:31.759Z --> skipping because content is stale + +APP 2025-04-04T19:32:31.759Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:31.759Z --> sent request | {"jsonrpc":"2.0","id":76,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:31.893Z --> received didChange | language: markdown | contentVersion: 383 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:31.921Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:31.921Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:32.012Z --> received didChange | language: markdown | contentVersion: 384 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:32.221Z --> received didChange | language: markdown | contentVersion: 385 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:32.461Z --> received didChange | language: markdown | contentVersion: 386 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:32.712Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":23,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":77} + +APP 2025-04-04T19:32:32.914Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, though, we need to switch to the UEFI loader, and we make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI mostly selected the default settings (auto-partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples are all executed on `f0` (with the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":386} + +APP 2025-04-04T19:32:32.914Z --> calling completion event + +APP 2025-04-04T19:32:32.914Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}}}] + +APP 2025-04-04T19:32:32.914Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:32.914Z --> copilot | completion request + +APP 2025-04-04T19:32:32.915Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:33.158Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:33.158Z --> completion hints: | + +APP 2025-04-04T19:32:33.158Z --> sent request | {"jsonrpc":"2.0","id":77,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":370,"character":23},"end":{"line":370,"character":23}}}]}]}} + +APP 2025-04-04T19:32:33.158Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:34.072Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":12,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":78} + +APP 2025-04-04T19:32:34.274Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\n### FreeBSD host\n\n### FreeBSD Bhyve guest\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":386} + +APP 2025-04-04T19:32:34.274Z --> calling completion event + +APP 2025-04-04T19:32:34.274Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}}}] + +APP 2025-04-04T19:32:34.274Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:34.274Z --> copilot | completion request + +APP 2025-04-04T19:32:34.274Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:34.592Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:34.592Z --> completion hints: | VM + +APP 2025-04-04T19:32:34.593Z --> sent request | {"jsonrpc":"2.0","id":78,"result":{"isIncomplete":false,"items":[{"label":"VM","kind":1,"preselect":true,"detail":"VM","insertText":"VM","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":370,"character":14},"end":{"line":370,"character":25}}}]}]}} + +APP 2025-04-04T19:32:34.593Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:36.267Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:37.085Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":12,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":79} + +APP 2025-04-04T19:32:37.209Z --> received didChange | language: markdown | contentVersion: 387 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:37.286Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve guest\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":387} + +APP 2025-04-04T19:32:37.286Z --> skipping because content is stale + +APP 2025-04-04T19:32:37.286Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:37.286Z --> sent request | {"jsonrpc":"2.0","id":79,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:37.569Z --> received didChange | language: markdown | contentVersion: 388 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:37.780Z --> received didChange | language: markdown | contentVersion: 389 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:38.232Z --> received didChange | language: markdown | contentVersion: 390 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:38.239Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":14,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":80} + +APP 2025-04-04T19:32:38.440Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve guest\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":390} + +APP 2025-04-04T19:32:38.440Z --> calling completion event + +APP 2025-04-04T19:32:38.440Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}}}] + +APP 2025-04-04T19:32:38.440Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:38.440Z --> copilot | completion request + +APP 2025-04-04T19:32:38.440Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:38.556Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:38.556Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}}}] + +APP 2025-04-04T19:32:38.556Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:39.308Z --> received didChange | language: markdown | contentVersion: 391 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:40.034Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:40.992Z --> received didChange | language: markdown | contentVersion: 392 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:41.170Z --> received didChange | language: markdown | contentVersion: 393 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:41.373Z --> received didChange | language: markdown | contentVersion: 394 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:42.703Z --> received didChange | language: markdown | contentVersion: 395 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:42.915Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:43.158Z --> received didChange | language: markdown | 
contentVersion: 396 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:43.975Z --> received didChange | language: markdown | contentVersion: 397 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:44.154Z --> received didChange | language: markdown | contentVersion: 398 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:44.275Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:46.380Z --> received didChange | language: markdown | contentVersion: 399 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:46.512Z --> received didChange | language: markdown | contentVersion: 400 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:46.600Z --> received didChange | language: markdown | contentVersion: 401 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:46.730Z --> received didChange | language: markdown | contentVersion: 402 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:46.831Z --> received didChange | language: markdown | contentVersion: 403 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:46.851Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":5,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":81} + +APP 2025-04-04T19:32:46.911Z --> received didChange | language: markdown | contentVersion: 404 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:46.916Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":6,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":82} + +APP 2025-04-04T19:32:47.073Z --> received didChange | language: markdown | contentVersion: 405 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.118Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post in the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nBhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut a few adjustments are needed to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's r\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":405} + +APP 2025-04-04T19:32:47.118Z --> skipping because content is stale + +APP 2025-04-04T19:32:47.118Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:47.118Z --> sent request | {"jsonrpc":"2.0","id":82,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:47.156Z --> received didChange | language: markdown | contentVersion: 406 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.234Z --> received didChange | language: markdown | contentVersion: 407 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.322Z --> received didChange | language: markdown | contentVersion: 408 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.327Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":10,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":83} + +APP 2025-04-04T19:32:47.433Z --> received didChange | language: 
markdown | contentVersion: 409 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.520Z --> received didChange | language: markdown | contentVersion: 410 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.528Z --> skipping because content is stale + +APP 2025-04-04T19:32:47.528Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:47.528Z --> sent request | {"jsonrpc":"2.0","id":83,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:47.680Z --> received didChange | language: markdown | contentVersion: 411 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.776Z --> received didChange | language: markdown | contentVersion: 412 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.869Z --> received didChange | language: markdown | contentVersion: 413 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:47.930Z --> received didChange | language: markdown | contentVersion: 414 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number (for example, POPCNT of `0b10110100` is 4). Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the `bhyve-firmware` package to make Bhyve work with UEFI guests.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments (e.g. switching to the UEFI loader). Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM each (across all three hosts, the future cluster gets 12 vCPUs and 42 GB of RAM in total). So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900                *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
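Since each host runs exactly one VM, every guest console ends up on the same port, so the endpoints can be derived rather than remembered. A trivial sketch (host names as used in this lab):

```sh
# Print the VNC endpoint of the single Bhyve VM on each FreeBSD host.
for h in f0 f1 f2; do
    echo "vnc://$h:5900"
done
```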
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed for the VM `r0` running on host `f0`:

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts (`doas sysrc vm_list=rocky vm_delay=5` would achieve the same):

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

We add the Rocky VMs to the `/etc/hosts` file of the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (0.4347 vs. 0.4022 ns/op, roughly 8%). The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux (0.4273 vs. 0.4347 ns/op), although still about 6% behind the bare-metal result. This is not really a scientific benchmark, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:48.530Z --> received didChange | language: markdown | contentVersion: 417 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:48.557Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:48.735Z --> received didChange | language: markdown | contentVersion: 418 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:48.806Z --> received didChange | language: markdown | contentVersion: 419 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:48.812Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":19,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":85} + +APP 2025-04-04T19:32:48.972Z --> received didChange | language: markdown | contentVersion: 420 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:49.013Z --> running completion on buffer | 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, m\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":420} + +APP 2025-04-04T19:32:49.013Z --> skipping because content is stale + +APP 2025-04-04T19:32:49.013Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:49.013Z --> sent request | {"jsonrpc":"2.0","id":85,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:49.036Z --> received didChange | language: markdown | contentVersion: 421 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:49.100Z --> received didChange | language: markdown | contentVersion: 422 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:49.182Z --> received didChange | language: markdown | contentVersion: 423 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:49.227Z --> received didChange | language: markdown | contentVersion: 424 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its minimal overhead allows virtual machines to achieve near-native performance, and it leverages the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V).
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead.
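To make it concrete what POPCNT computes, here is a tiny software emulation of the instruction in plain shell (an illustration only; nothing in the Bhyve setup depends on it):

```sh
#!/bin/sh
# Software emulation of the POPCNT instruction: count the 1-bits
# of an integer by shifting it right and testing the lowest bit.
popcnt() {
    x=$1
    count=0
    while [ "$x" -gt 0 ]; do
        count=$((count + (x & 1)))
        x=$((x >> 1))
    done
    echo "$count"
}

popcnt 255    # 0b11111111     -> 8
popcnt 3500   # 0b110110101100 -> 7
```

The hardware instruction does the same in a single cycle, which is why guests like to use it.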
We also install the `bhyve-firmware` package so that Bhyve can boot guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs some adjustments. We also give the VMs beefy specs (4 CPU cores and 14GB of RAM each), as we intend to run the majority of the workload in the k3s cluster on these Linux VMs.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, and installations happening once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but for just three VMs it wasn't worth the effort.
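Growing the image with `truncate` works because `disk0.img` is a sparse file: the command only raises the file's logical size, and no disk blocks are allocated until the guest writes to them. A quick sketch of the behaviour with a throwaway file (the 1 MiB size is arbitrary):

```sh
#!/bin/sh
# Create an empty file and grow it to 1 MiB without writing any data.
img=$(mktemp)
truncate -s 1M "$img"
size=$(wc -c < "$img")      # logical size in bytes
echo "logical size: $size bytes"
rm -f "$img"
```

On the actual VM, the guest still needs to grow its partitions and filesystems to use the new space, which the Rocky installer's auto-partitioning takes care of here.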
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host; it may become useful later when there are more. After adding these lines, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
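Since the three VMs differ only in hostname and address, the per-VM commands can be generated rather than typed by hand. A sketch that just prints them (using the `r0`-`r2` / `192.168.1.120`-`122` scheme from this series; `enp0s5` is the guest's interface name):

```sh
#!/bin/sh
# Print the static-IP setup commands for each Rocky VM.
gen_static_ip_cmds() {
    for i in 0 1 2; do
        ip="192.168.1.12$i"
        printf '# --- r%s ---\n' "$i"
        printf 'nmcli connection modify enp0s5 ipv4.addresses %s/24\n' "$ip"
        printf 'nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n'
        printf 'nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n'
        printf 'nmcli connection modify enp0s5 ipv4.method manual\n'
        printf 'hostnamectl set-hostname r%s.lan.buetow.org\n' "$i"
    done
}

gen_static_ip_cmds
```

With only three VMs I just ran the commands manually, as shown next.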
As per the previous post in this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands in each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Here:

* `192.168.1.120` is the IP of the VM itself (in this example: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

A bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt!

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:50.110Z --> received didChange | language: markdown | contentVersion: 430 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:50.229Z --> received didChange | language: markdown | contentVersion: 431 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:50.355Z --> failed to parse line: | failed to parse | Content-Length: 15189
+
+ + +APP 2025-04-04T19:32:50.356Z --> received didChange | language: markdown | contentVersion: 432 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:50.606Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":32,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":87} + +APP 2025-04-04T19:32:50.733Z --> received didChange | language: markdown | contentVersion: 433 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:50.807Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark with `ubench`.

### FreeBSD host

### FreeBSD @ Bhyve

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:51.383Z --> received didChange | language: markdown | contentVersion: 438 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:51.476Z --> received didChange | language: markdown | contentVersion: 439 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:51.482Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":39,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":88} + +APP 2025-04-04T19:32:51.683Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisitcated, \n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":439} + +APP 2025-04-04T19:32:51.684Z --> calling completion event + +APP 2025-04-04T19:32:51.684Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:32:51.684Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:51.684Z --> copilot | completion request + +APP 2025-04-04T19:32:51.684Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:51.788Z --> received didChange | language: markdown | contentVersion: 440 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:51.871Z --> received didChange | language: markdown | contentVersion: 441 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:52.003Z --> 
received didChange | language: markdown | contentVersion: 442 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:52.060Z --> received didChange | language: markdown | contentVersion: 443 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:52.187Z --> received didChange | language: markdown | contentVersion: 444 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:52.377Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:52.379Z --> completion hints: | benchmarking tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. This tool is available in the FreeBSD ports collection and can be installed with: + +APP 2025-04-04T19:32:52.379Z --> sent request | {"jsonrpc":"2.0","id":88,"result":{"isIncomplete":false,"items":[{"label":"benchmarking tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. This tool is available in the FreeBSD ports collection and can be installed with:","kind":1,"preselect":true,"detail":"benchmarking tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. This tool is available in the FreeBSD ports collection and can be installed with:","insertText":"benchmarking tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. 
This tool is available in the FreeBSD ports collection and can be installed with:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":230},"end":{"line":368,"character":230}}}]}]}} + +APP 2025-04-04T19:32:52.379Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:52.394Z --> received didChange | language: markdown | contentVersion: 445 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:52.430Z --> received didChange | language: markdown | contentVersion: 446 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:52.600Z --> received didChange | language: markdown | contentVersion: 447 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:52.607Z --> received didChange | language: markdown | contentVersion: 448 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:53.126Z --> received didChange | language: markdown | contentVersion: 449 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:53.131Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":49,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":89} + +APP 2025-04-04T19:32:53.333Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
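`vm configure` simply opens the config file in an editor, so the same changes can also be scripted. A sketch (it works on a local copy of the three default lines that change; on the host, the file would be `/bhyve/rocky/rocky.conf`):

```shell
#!/bin/sh
# Start from the three default lines that need to change.
conf=rocky.conf
printf 'loader="bhyveload"\ncpu=1\nmemory=256M\n' > "$conf"

# Switch to the UEFI loader and the beefier specs used in this post.
sed -e 's/^loader=.*/loader="uefi"/' \
    -e 's/^cpu=.*/cpu=4/' \
    -e 's/^memory=.*/memory=14G/' "$conf" > "$conf.tmp" && mv "$conf.tmp" "$conf"

# Add the Linux guest settings on top.
printf 'guest="linux"\nuefi_vars="yes"\n' >> "$conf"
cat "$conf"
```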
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or automated further, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
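The `r0`..`r2` host entries follow a strict pattern, so they can also be generated instead of typed. A sketch using the addresses from this post (on the hosts, the output would be piped into `doas tee -a /etc/hosts`):

```shell
#!/bin/sh
# Generate the /etc/hosts lines for the three Rocky VMs r0..r2.
entries=$(for i in 0 1 2; do
    echo "192.168.1.12$i r$i r$i.lan r$i.lan.buetow.org"
done)
echo "$entries"
```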
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark, `ubench`.\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
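To put the `BenchmarkCPUSilly1` ns/op numbers into perspective, the relative slowdown of each guest against the FreeBSD host result can be computed directly (a sketch using the values reported above):

```shell
#!/bin/sh
# ns/op for BenchmarkCPUSilly1 as reported in the benchmark sections.
host=0.4022        # FreeBSD host
linux_vm=0.4347    # Rocky Linux VM on Bhyve
freebsd_vm=0.4273  # FreeBSD VM on Bhyve

# Percentage slowdown of a guest relative to the host.
overhead() {
    awk -v g="$1" -v h="$host" 'BEGIN { printf "+%.1f%%", (g - h) / h * 100 }'
}

linux_overhead=$(overhead "$linux_vm")
freebsd_overhead=$(overhead "$freebsd_vm")
echo "Rocky Linux VM: $linux_overhead"
echo "FreeBSD VM:     $freebsd_overhead"
```

So both guests land within roughly 6-8% of the host in this (admittedly silly) micro-benchmark.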
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":449} + +APP 2025-04-04T19:32:53.334Z --> calling completion event + +APP 2025-04-04T19:32:53.334Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:32:53.334Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:53.334Z --> copilot | completion request + +APP 2025-04-04T19:32:53.334Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:53.463Z --> received didChange | language: markdown | contentVersion: 450 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:53.646Z --> received didChange | language: markdown | contentVersion: 451 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:53.817Z --> 
received didChange | language: markdown | contentVersion: 452 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:53.909Z --> received didChange | language: markdown | contentVersion: 453 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:54.009Z --> received didChange | language: markdown | contentVersion: 454 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:54.095Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:54.096Z --> completion hints: | tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. The `ubench` tool is a benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. It provides a more comprehensive view of system performance compared to simple benchmarks. + +APP 2025-04-04T19:32:54.096Z --> sent request | {"jsonrpc":"2.0","id":89,"result":{"isIncomplete":false,"items":[{"label":"tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. The `ubench` tool is a benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. It provides a more comprehensive view of system performance compared to simple benchmarks.","kind":1,"preselect":true,"detail":"tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. The `ubench` tool is a benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. 
It provides a more comprehensive view of system performance compared to simple benchmarks.","insertText":"tool called `ubench` to compare the performance of FreeBSD and Rocky Linux VMs running on Bhyve. The `ubench` tool is a benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. It provides a more comprehensive view of system performance compared to simple benchmarks.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":374},"end":{"line":368,"character":374}}}]}]}} + +APP 2025-04-04T19:32:54.096Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:54.127Z --> received didChange | language: markdown | contentVersion: 455 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:54.132Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":49,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":90} + +APP 2025-04-04T19:32:54.279Z --> received didChange | language: markdown | contentVersion: 456 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:54.333Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisitcated, benchmark u\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":456} + +APP 2025-04-04T19:32:54.333Z --> skipping because content is stale + +APP 2025-04-04T19:32:54.334Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:54.334Z --> sent request | {"jsonrpc":"2.0","id":90,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:32:54.369Z --> received didChange | language: markdown | contentVersion: 457 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:54.512Z --> received didChange | language: markdown | contentVersion: 458 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:54.564Z --> received didChange | language: markdown | contentVersion: 459 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:54.650Z --> received didChange | language: markdown | contentVersion: 460 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution running on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. One of its main strengths is minimal overhead, which allows it to achieve near-native performance for virtual machines.
It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that provide virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Support for POPCNT matters here because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work.
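As a brief aside on the `POPCNT` check above: what the instruction computes in a single CPU instruction is easy to emulate in software. Here is a small illustrative sketch (not part of the setup) that counts set bits the slow way in awk:

```shell
# Count the set bits (ones) of a number in software -- the same result
# POPCNT produces in a single CPU instruction.
popcnt() {
    awk -v n="$1" 'BEGIN {
        c = 0
        while (n > 0) { c += n % 2; n = int(n / 2) }
        print c
    }'
}

popcnt 22          # binary 10110 -> prints 3
popcnt 4294967295  # 2^32 - 1, all 32 low bits set -> prints 32
```

Go's `math/bits.OnesCount64`, for instance, compiles down to this very instruction on amd64 CPUs that support it.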
We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
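Stepping back to the host setup for a moment: since the `vm-bhyve` bootstrap above is identical on `f0`, `f1`, and `f2`, it could also be scripted. The following is a hypothetical dry-run sketch (this helper is not part of my actual setup): it prints the command sequence per host, and each block could then be shipped to the host via SSH:

```shell
#!/bin/sh
# Dry run: print the identical vm-bhyve bootstrap commands for each
# host. To actually apply them, pipe a block to e.g. `ssh f0 sh`
# (assuming doas rules as set up earlier in this series).
for host in f0 f1 f2; do
    echo "# --- $host ---"
    cat <<'EOF'
doas pkg install -y vm-bhyve bhyve-firmware
doas sysrc vm_enable=YES
doas sysrc vm_dir=zfs:zroot/bhyve
doas zfs create zroot/bhyve
doas vm init
doas vm switch create public
doas vm switch add public re0
EOF
done
```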
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the loader has to be changed to UEFI. We also make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs (4 CPU cores and 14GB of RAM).
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and went through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually went through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
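A note on the `truncate` step above: `truncate -s 100G` only changes the apparent size of the image file. The file is sparse, so no disk blocks are allocated until the guest actually writes data. This is easy to demonstrate on a scratch file (the `/tmp` path is just for illustration):

```shell
#!/bin/sh
# Sparse file demo: the apparent size grows to 100G, while the
# allocated size stays near zero until real data is written.
img=/tmp/sparse-demo.img
truncate -s 100G "$img"
ls -lh "$img"   # apparent size: 100G
du -h "$img"    # allocated blocks: ~0
rm "$img"
```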
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's even a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated, benchmark using `ubench`.

### FreeBSD host

### FreeBSD @ Bhyve

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
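To put a rough number on the overhead: taking the silly-benchmark results above at face value, the Rocky Linux guest's ns/op versus the FreeBSD host's works out to about 8% (awk doing the arithmetic on the `BenchmarkCPUSilly1` values measured earlier):

```shell
# Relative CPU overhead of the Rocky Linux guest vs. the FreeBSD host,
# computed from the BenchmarkCPUSilly1 ns/op values measured above.
awk 'BEGIN {
    host  = 0.4022  # FreeBSD host, ns/op
    guest = 0.4347  # Rocky Linux guest, ns/op
    printf "guest vs host: +%.1f%%\n", (guest / host - 1) * 100
}'
# guest vs host: +8.1%
```

Again: single runs of a silly micro-benchmark, so treat this as an indication rather than a measurement.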
With Linux VMs, I can tap into all the cool stuff in the Linux world (e.g., Kubernetes) while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Furthermore, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
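If the fleet ever grows beyond three VMs, the resize step is an obvious candidate for scripting. A minimal sketch that only prints the per-host commands (the host names and image path are the ones used in this series; swap `echo` for an actual `ssh` invocation to run them):

```shell
# Generate the resize command for each FreeBSD host instead of typing it
# three times; this only prints the commands, it does not execute anything.
resize_cmds=$(for host in f0 f1 f2; do
    echo "ssh $host doas truncate -s 100G /bhyve/rocky/disk0.img"
done)
echo "$resize_cmds"
```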
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.
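The per-VM values above differ only in the host index and the last IP octet, so the `/etc/hosts` entries can be generated instead of typed. A small sketch (it prints only; pipe the output into `doas tee -a /etc/hosts` to apply it):

```shell
# Generate the r0..r2 host entries from the shared 192.168.1.12x pattern.
host_entries=$(for i in 0 1 2; do
    printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done)
echo "$host_entries"
```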
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```
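To put the two `BenchmarkCPUSilly1` numbers in relation, a quick back-of-the-envelope calculation (0.4022 ns/op on the FreeBSD host vs. 0.4347 ns/op in the Linux VM):

```shell
# Relative slowdown of the Linux-on-Bhyve result vs. the bare FreeBSD host.
overhead=$(awk 'BEGIN { printf "%.1f", (0.4347 - 0.4022) / 0.4022 * 100 }')
echo "Linux VM overhead: ${overhead}%"
```

Roughly eight percent in this silly micro-benchmark, with the caveat that the guest also ran an older Go toolchain.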
The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`.

### FreeBSD host

### FreeBSD @ Bhyve

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
","triggerKind":2},"position":{"character":69,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":94} + +APP 2025-04-04T19:32:57.372Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix benchmarking tool.\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":474} + +APP 2025-04-04T19:32:57.373Z --> calling completion event + +APP 2025-04-04T19:32:57.373Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:32:57.373Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:57.373Z --> copilot | completion request + +APP 2025-04-04T19:32:57.373Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:57.735Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:57.735Z --> completion hints: | universal benchmark tool. It is available for FreeBSD and Linux, so we can compare the results. + +APP 2025-04-04T19:32:57.735Z --> sent request | {"jsonrpc":"2.0","id":94,"result":{"isIncomplete":false,"items":[{"label":"universal benchmark tool. 
It is available for FreeBSD and Linux, so we can compare the results.","kind":1,"preselect":true,"detail":"universal benchmark tool. It is available for FreeBSD and Linux, so we can compare the results.","insertText":"universal benchmark tool. It is available for FreeBSD and Linux, so we can compare the results.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":164},"end":{"line":368,"character":164}}}]}]}} + +APP 2025-04-04T19:32:57.735Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:58.148Z --> received didChange | language: markdown | contentVersion: 475 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:58.217Z --> received didChange | language: markdown | contentVersion: 476 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:58.316Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:58.381Z --> received didChange | language: markdown | contentVersion: 477 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:58.443Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:58.479Z --> received didChange | language: markdown | contentVersion: 478 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:58.588Z --> received didChange | language: markdown | contentVersion: 479 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:58.595Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":74,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":95} + +APP 2025-04-04T19:32:58.797Z --> calling completion event + +APP 2025-04-04T19:32:58.797Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:32:58.797Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:32:58.797Z --> copilot | completion request + +APP 2025-04-04T19:32:58.798Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:32:58.812Z --> received didChange | language: markdown | contentVersion: 480 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:58.899Z --> received didChange | language: markdown | contentVersion: 481 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.038Z --> 
received didChange | language: markdown | contentVersion: 482 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.118Z --> received didChange | language: markdown | contentVersion: 483 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.168Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:32:59.168Z --> completion hints: | benchmarking tool. It is available in the FreeBSD ports collection and can be installed with `pkg install ubench`. + +APP 2025-04-04T19:32:59.169Z --> sent request | {"jsonrpc":"2.0","id":95,"result":{"isIncomplete":false,"items":[{"label":"benchmarking tool. It is available in the FreeBSD ports collection and can be installed with `pkg install ubench`.","kind":1,"preselect":true,"detail":"benchmarking tool. It is available in the FreeBSD ports collection and can be installed with `pkg install ubench`.","insertText":"benchmarking tool. 
It is available in the FreeBSD ports collection and can be installed with `pkg install ubench`.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":188},"end":{"line":368,"character":188}}}]}]}} + +APP 2025-04-04T19:32:59.169Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:32:59.214Z --> received didChange | language: markdown | contentVersion: 484 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.411Z --> received didChange | language: markdown | contentVersion: 485 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.456Z --> received didChange | language: markdown | contentVersion: 486 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.609Z --> received didChange | language: markdown | contentVersion: 487 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.661Z --> received didChange | language: markdown | contentVersion: 488 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.758Z --> received didChange | language: markdown | contentVersion: 489 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.763Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":84,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":96} + +APP 2025-04-04T19:32:59.916Z --> received didChange | language: markdown | contentVersion: 490 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:32:59.964Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically when the servers boot, we append the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required: it makes vm-bhyve wait 5 seconds before starting each VM, and there is currently only one VM per host. It may become useful later when there are more. After adding this, `vm list` shows `Yes` in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
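A small pitfall with the here-document style used for `rc.conf` above (and for the hosts files below): the `END` terminator must sit alone on its own line, otherwise `cat` keeps reading and the shell complains about an unterminated here-document. A self-contained check against a scratch file instead of `/etc/rc.conf`:

```sh
# Append two settings via a here-document; END must close the block.
rc=$(mktemp)
cat <<END | tee -a "$rc" >/dev/null
vm_list="rocky"
vm_delay="5"
END
cat "$rc"   # both settings landed in the file
rm -f "$rc"
```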
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add corresponding entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility.

### FreeBSD host

### FreeBSD @ Bhyve

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+
+ + +APP 2025-04-04T19:33:00.819Z --> received didChange | language: markdown | contentVersion: 495 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:01.069Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":90,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":98} + +APP 2025-04-04T19:33:01.159Z --> received didChange | language: markdown | contentVersion: 496 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:01.270Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI mostly selected the default settings (auto-partitioning on the 100GB drive and setting a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (which hosts the VM `r0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, and there is currently only one VM per host; maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
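The address plan is regular (VM `rN` gets `192.168.1.12N`, mirroring the `fN` hosts at `192.168.1.13N`), so the hosts-file entries can be generated with a small loop rather than typed by hand. A sketch:

```sh
# Generate /etc/hosts entries for the Rocky VMs: r0..r2 map to
# 192.168.1.120..122, following the same naming scheme as the f hosts.
base=120
for i in 0 1 2; do
    printf '192.168.1.%d r%d r%d.lan r%d.lan.buetow.org\n' \
        "$((base + i))" "$i" "$i" "$i"
done
```

Piping that output into `doas tee -a /etc/hosts` would have the same effect as the heredoc used below.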
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility.\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve\n\n## Conclusions\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":496}
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:02.228Z --> received didChange | language: markdown | contentVersion: 500 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:02.485Z --> received didChange | language: markdown | contentVersion: 501 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:02.486Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":92,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":99} + +APP 2025-04-04T19:33:02.550Z --> received didChange | language: markdown | contentVersion: 502 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:02.556Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":93,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":100} + +APP 2025-04-04T19:33:02.575Z --> received didChange | language: markdown | contentVersion: 503 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:02.738Z --> received didChange | language: markdown | contentVersion: 504 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:02.757Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisitcated, benchmark using `ubench`, the UNIX benchmark utility, av\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":504} + +APP 2025-04-04T19:33:02.757Z --> skipping because content is stale + +APP 2025-04-04T19:33:02.757Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:02.757Z --> sent request | {"jsonrpc":"2.0","id":100,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:33:02.915Z --> received didChange | language: markdown | contentVersion: 505 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:02.946Z --> received didChange | language: markdown | contentVersion: 506 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:03.140Z --> received didChange | language: markdown | contentVersion: 507 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:03.199Z --> received didChange | language: markdown | contentVersion: 508 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:03.334Z --> received didChange | language: markdown | contentVersion: 509 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:03.334Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:03.552Z --> received didChange | language: markdown | contentVersion: 510 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:03.587Z --> failed to parse line: | failed to parse | Content-Length: 15259
+
+ + +APP 2025-04-04T19:33:03.588Z --> received didChange | language: markdown | contentVersion: 511 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:03.698Z --> received didChange | language: markdown | contentVersion: 512 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:03.704Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":103,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":101} + +APP 2025-04-04T19:33:03.905Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
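Since these `key="value"` config files are just flat text, a small awk sketch can pull out the fields that differ per VM, such as `uuid` and `network0_mac`. This is a throwaway helper of my own, not a `vm-bhyve` feature, and it assumes the config lives in a file like the one above:

```sh
# Sample vm-bhyve style config to parse (values taken from rocky.conf above).
cat > /tmp/rocky.conf <<'END'
loader="bhyveload"
cpu=1
memory=256M
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
END

# conf_get KEY FILE: print the value of KEY with surrounding quotes stripped.
conf_get() {
    awk -F'=' -v key="$1" '$1 == key { gsub(/"/, "", $2); print $2 }' "$2"
}

conf_get uuid /tmp/rocky.conf          # -> 1c4655ac-c828-11ef-a920-e8ff1ed71ca0
conf_get network0_mac /tmp/rocky.conf  # -> 58:9c:fc:0d:13:3f
```

Handy when diffing the three per-host configs to confirm only the per-VM fields differ.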
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
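If you did want to script the repetitive parts, the per-host pieces are easy to derive in a small loop. A minimal sketch, assuming the `f0`-`f2` host naming used in this series and the fact that each host's single VM listens on display port 5900:

```sh
# Print the VNC endpoint of each Bhyve host. Each VM listens on its
# host's port 5900, as shown in the `vm list` output.
vnc_urls() {
    for host in f0 f1 f2; do
        echo "vnc://${host}:5900"
    done
}

vnc_urls
```

With more than one VM per host, you would need to track the per-VM display port instead of hardcoding 5900.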
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples are all executed on `f0` and on the VM `r0` running on `f0`:

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses. 
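The address plan is deliberately simple: VM `rN` gets `192.168.1.12N`. Rather than typing the `/etc/hosts` lines three times, a throwaway loop can generate them (a sketch; `lan.buetow.org` is my home domain):

```sh
# Generate /etc/hosts entries for the three Rocky VMs r0..r2,
# following the rN -> 192.168.1.12N address convention.
hosts_entries() {
    for i in 0 1 2; do
        echo "192.168.1.12${i} r${i} r${i}.lan r${i}.lan.buetow.org"
    done
}

hosts_entries   # pipe into `doas tee -a /etc/hosts` on each FreeBSD host
```

The same output can be appended on the hosts and inside the VMs, keeping both hosts files consistent.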
As per the previous post in this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Whereas:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a `PermitRootLogin yes` line to `/etc/ssh/sshd_config`.

Once done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all three VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility, available on FreeBSD and Linux. It is a bit more scientific than the silly benchmark above.

### FreeBSD host

### FreeBSD @ Bhyve

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
It is a bit more scientific than the silly benchmark above. + +APP 2025-04-04T19:33:04.229Z --> sent request | {"jsonrpc":"2.0","id":101,"result":{"isIncomplete":false,"items":[{"label":"on FreeBSD and Linux. It is a bit more scientific than the silly benchmark above.","kind":1,"preselect":true,"detail":"on FreeBSD and Linux. It is a bit more scientific than the silly benchmark above.","insertText":"on FreeBSD and Linux. It is a bit more scientific than the silly benchmark above.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":184},"end":{"line":368,"character":184}}}]}]}} + +APP 2025-04-04T19:33:04.229Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:04.236Z --> received didChange | language: markdown | contentVersion: 514 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:04.417Z --> received didChange | language: markdown | contentVersion: 515 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:04.595Z --> received didChange | language: markdown | contentVersion: 516 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:05.215Z --> received didChange | language: markdown | contentVersion: 517 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:05.462Z --> received didChange | language: markdown | contentVersion: 518 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:05.564Z --> 
received didChange | language: markdown | contentVersion: 519 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:05.705Z --> received didChange | language: markdown | contentVersion: 520 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:05.710Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":103,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":102} + +APP 2025-04-04T19:33:05.910Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisitcated, benchmark using `ubench`, the UNIX benchmark utility, available \n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":520} + +APP 2025-04-04T19:33:05.910Z --> calling completion event + +APP 2025-04-04T19:33:05.911Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:33:05.911Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:33:05.911Z --> copilot | completion request + +APP 2025-04-04T19:33:05.911Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:33:06.310Z --> received didChange | language: markdown | contentVersion: 521 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:06.367Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:06.368Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:33:06.368Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:33:06.506Z --> received didChange | language: markdown | contentVersion: 522 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:06.528Z --> received didChange | language: markdown | contentVersion: 523 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:06.713Z --> received didChange | language: markdown | contentVersion: 524 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:06.794Z --> received didChange | language: markdown | contentVersion: 525 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:06.800Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":108,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":103} + +APP 2025-04-04T19:33:06.845Z 
--> received didChange | language: markdown | contentVersion: 526 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:06.954Z --> received didChange | language: markdown | contentVersion: 527 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.001Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable:  -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir:  -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso        1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, a few adjustments are needed. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the UNIX benchmark utility, available from th\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":527} + +APP 2025-04-04T19:33:07.001Z --> skipping because content is stale + +APP 2025-04-04T19:33:07.001Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:07.002Z --> sent request | {"jsonrpc":"2.0","id":103,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:33:07.023Z --> received didChange | language: markdown | contentVersion: 528 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.149Z --> received didChange | language: markdown | contentVersion: 529 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.154Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":112,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":104} + +APP 2025-04-04T19:33:07.277Z --> 
received didChange | language: markdown | contentVersion: 530 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.356Z --> skipping because content is stale + +APP 2025-04-04T19:33:07.356Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:07.356Z --> sent request | {"jsonrpc":"2.0","id":104,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:33:07.374Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:07.469Z --> received didChange | language: markdown | contentVersion: 531 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.530Z --> received didChange | language: markdown | contentVersion: 532 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.661Z --> received didChange | language: markdown | contentVersion: 533 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.831Z --> received didChange | language: markdown | contentVersion: 534 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:07.916Z --> received didChange | language: markdown | contentVersion: 535 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:08.088Z --> received didChange | language: markdown | contentVersion: 536 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:08.194Z --> received didChange | language: markdown | contentVersion: 537 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:08.200Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":120,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":105} + +APP 2025-04-04T19:33:08.318Z --> received didChange | language: markdown | contentVersion: 538 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:08.369Z --> received didChange | language: markdown | contentVersion: 539 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:08.391Z --> received didChange | language: markdown | contentVersion: 540 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:08.400Z --> 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution that will run on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for storage and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for POPCNT matters because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.
## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management considerably. We also install the package required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Rocky Linux 9 is a good choice for a VM operating system primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with Red Hat Enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/
### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, we switch the loader to UEFI, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:
```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```
### Connect to VNC

For the installation, I used the VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but with only three VMs, it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, a `Yes` indicator shows up in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Whereas:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`).
* `192.168.1.1` is the address of my home router, which also does DNS.
### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench
### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's even a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!
## Benchmarking with `ubench`

Let's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility, available from the FreeBSD ports collection.

### FreeBSD host

### FreeBSD @ Bhyve

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) of the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
","triggerKind":2},"position":{"character":107,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":108} + +APP 2025-04-04T19:33:14.822Z --> received didChange | language: markdown | contentVersion: 549 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:14.908Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisitcated, benchmark using `ubench`, the UNIX benchmark utility, available for F\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":549} + +APP 2025-04-04T19:33:14.908Z --> skipping because content is stale + +APP 2025-04-04T19:33:14.908Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:14.908Z --> sent request | {"jsonrpc":"2.0","id":108,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:33:15.005Z --> received didChange | language: markdown | contentVersion: 550 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:15.032Z --> received didChange | language: markdown | contentVersion: 551 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:15.195Z --> received didChange | language: markdown | contentVersion: 552 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:15.350Z --> received didChange | language: markdown | contentVersion: 553 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:15.436Z --> received didChange | language: markdown | contentVersion: 554 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:15.687Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":113,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":109} + +APP 2025-04-04T19:33:15.888Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for Bhyve because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (and the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the UNIX benchmark utility, available for FreeBSD.\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":554}
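As a sanity check on the ns/op figures captured in the benchmark output above, the relative Bhyve overhead can be computed directly from them. This is a rough back-of-the-envelope sketch: the `overheadPct` helper and the hard-coded numbers (taken from the `BenchmarkCPUSilly1` runs quoted above) are only for illustration, not part of the sillybench tool itself:

```go
package main

import "fmt"

// overheadPct returns how much slower a VM result is than the
// bare-metal result, in percent, given two ns/op figures.
func overheadPct(host, vm float64) float64 {
	return (vm/host - 1) * 100
}

func main() {
	const (
		hostNsOp      = 0.4022 // FreeBSD on bare metal (BenchmarkCPUSilly1)
		linuxVMNsOp   = 0.4347 // Rocky Linux in a Bhyve VM
		freebsdVMNsOp = 0.4273 // FreeBSD in a Bhyve VM
	)
	fmt.Printf("Rocky Linux VM overhead: %.1f%%\n", overheadPct(hostNsOp, linuxVMNsOp))
	fmt.Printf("FreeBSD VM overhead:     %.1f%%\n", overheadPct(hostNsOp, freebsdVMNsOp))
}
```

Both guests land within roughly 6 to 8% of bare metal in this silly benchmark, which supports the "Bhyve performs excellently" reading of the results.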
+ + +APP 2025-04-04T19:33:23.573Z --> received didChange | language: markdown | contentVersion: 568 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:23.595Z --> received didChange | language: markdown | contentVersion: 569 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:23.753Z --> received didChange | language: markdown | contentVersion: 570 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:24.005Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":37,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":111} + +APP 2025-04-04T19:33:24.207Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs a few adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM).
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their names to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility, available for FreeBSD. It \n\n\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":576} + +APP 2025-04-04T19:33:57.160Z --> calling completion event + +APP 2025-04-04T19:33:57.160Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:33:57.160Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:33:57.160Z --> copilot | completion request + +APP 2025-04-04T19:33:57.160Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:33:57.844Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:33:57.845Z --> completion hints: | is a simple benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. It provides a quick way to assess the overall performance of a system and can be useful for comparing different configurations or systems. 
+ +APP 2025-04-04T19:33:57.845Z --> sent request | {"jsonrpc":"2.0","id":113,"result":{"isIncomplete":false,"items":[{"label":"is a simple benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. It provides a quick way to assess the overall performance of a system and can be useful for comparing different configurations or systems.","kind":1,"preselect":true,"detail":"is a simple benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. It provides a quick way to assess the overall performance of a system and can be useful for comparing different configurations or systems.","insertText":"is a simple benchmarking tool that measures the performance of various system components, including CPU, memory, and disk I/O. It provides a quick way to assess the overall performance of a system and can be useful for comparing different configurations or systems.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":384},"end":{"line":368,"character":384}}}]}]}} + +APP 2025-04-04T19:33:57.845Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:33:59.101Z --> received didChange | language: markdown | contentVersion: 577 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:59.579Z --> received didChange | language: markdown | contentVersion: 578 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:59.640Z --> received didChange | language: markdown | contentVersion: 579 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:59.777Z --> received didChange | language: markdown | contentVersion: 580 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:59.782Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":123,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":114} + +APP 2025-04-04T19:33:59.909Z --> received didChange | language: markdown | contentVersion: 581 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:33:59.983Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But to make Rocky Linux boot, the loader has to be switched to UEFI, and some other adjustments are needed: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
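A side note on the `truncate` step above: it only changes the file's apparent size, so the resize is instant and the image stays sparse until the guest actually writes data. A quick sketch on a scratch file (not the real disk image) illustrates the behavior:

```sh
# Create a scratch file and grow it to 1G apparent size.
f=$(mktemp)
truncate -s 1G "$f"

# The apparent size is now 1 GiB (1073741824 bytes)...
wc -c < "$f"

# ...but the actually allocated space stays near zero, because no
# data blocks have been written yet (a sparse file).
du -k "$f"

rm -f "$f"
```

The guest still has to grow its partitions and file systems into the extra space; here that happens naturally because the image was enlarged before re-running the installer's auto-partitioning.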
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required: it makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. It may become useful later when there are more. After this change, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
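As an aside, the host names and addresses used for the VMs below follow a simple pattern (`r0` through `r2` on `192.168.1.120` through `192.168.1.122`). A small sh loop (a hypothetical convenience, reusing the `.lan.buetow.org` naming from this series) could print the `/etc/hosts` entries instead of typing them out:

```sh
# Print /etc/hosts entries for the three Rocky VMs r0..r2,
# starting at 192.168.1.120 (adjust for your own network).
for i in 0 1 2; do
  printf '192.168.1.%d r%d r%d.lan r%d.lan.buetow.org\n' \
    "$((120 + i))" "$i" "$i" "$i"
done
```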
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of them anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the UNIX benchmark utility, which is available for FreeBSD.

### FreeBSD host

### FreeBSD @ Bhyve

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:00.533Z --> received didChange | language: markdown | contentVersion: 586 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:00.703Z --> received didChange | language: markdown | contentVersion: 587 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:00.742Z --> received didChange | language: markdown | contentVersion: 588 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:00.822Z --> received didChange | language: markdown | contentVersion: 589 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:00.901Z --> received didChange | language: markdown | contentVersion: 590 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:00.907Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":133,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":115} + +APP 2025-04-04T19:34:01.045Z --> received didChange | language: markdown | contentVersion: 591 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:01.107Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility, available for FreeBSD. It was installed b\n\n\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":591} + +APP 2025-04-04T19:34:01.107Z --> skipping because content is stale + +APP 2025-04-04T19:34:01.108Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:01.108Z --> sent request | {"jsonrpc":"2.0","id":115,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:34:01.274Z --> received didChange | language: markdown | contentVersion: 592 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:01.431Z --> received didChange | language: markdown | contentVersion: 593 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:01.436Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":136,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":116} + +APP 2025-04-04T19:34:01.636Z --> 
running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"… [near-verbatim duplicate of the version-591 buffer above] …","languageId":"markdown","version":593} + +APP 2025-04-04T19:34:01.637Z --> calling completion event + +APP 2025-04-04T19:34:01.637Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:34:01.637Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:01.637Z --> copilot | completion request + +APP 2025-04-04T19:34:01.638Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:34:01.751Z --> received didChange | language: markdown | contentVersion: 594 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:01.857Z --> received didChange | language: markdown | contentVersion: 595 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:01.921Z --> 
received didChange | language: markdown | contentVersion: 596 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:02.145Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:02.146Z --> completion hints: | default with the FreeBSD system. It is a bit old, but it is still a good tool to get a rough idea of the performance of the system. + +APP 2025-04-04T19:34:02.146Z --> sent request | {"jsonrpc":"2.0","id":116,"result":{"isIncomplete":false,"items":[{"label":"default with the FreeBSD system. It is a bit old, but it is still a good tool to get a rough idea of the performance of the system.","kind":1,"preselect":true,"detail":"default with the FreeBSD system. It is a bit old, but it is still a good tool to get a rough idea of the performance of the system.","insertText":"default with the FreeBSD system. It is a bit old, but it is still a good tool to get a rough idea of the performance of the system.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":267},"end":{"line":368,"character":267}}}]}]}} + +APP 2025-04-04T19:34:02.146Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:02.158Z --> received didChange | language: markdown | contentVersion: 597 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:02.206Z --> received didChange | language: markdown | contentVersion: 598 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:02.456Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":141,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":117} + +APP 2025-04-04T19:34:02.505Z --> received didChange | language: markdown | contentVersion: 599 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:02.602Z --> received didChange | language: markdown | contentVersion: 600 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:02.607Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":143,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":118} + +APP 2025-04-04T19:34:02.808Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility, available for FreeBSD. It was installed by simply \n\n\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":600} + +APP 2025-04-04T19:34:02.808Z --> calling completion event + +APP 2025-04-04T19:34:02.808Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:34:02.809Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:02.809Z --> copilot | completion request + +APP 2025-04-04T19:34:02.809Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:34:03.003Z --> received didChange | language: markdown | contentVersion: 601 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.071Z --> received didChange | language: markdown | contentVersion: 602 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.288Z --> 
received didChange | language: markdown | contentVersion: 603 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.374Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:03.374Z --> completion hints: | running `pkg install ubench` on the FreeBSD host. The benchmark is a bit more sophisticated and runs a series of tests, including CPU, memory, and I/O performance. + +APP 2025-04-04T19:34:03.375Z --> sent request | {"jsonrpc":"2.0","id":118,"result":{"isIncomplete":false,"items":[{"label":"running `pkg install ubench` on the FreeBSD host. The benchmark is a bit more sophisticated and runs a series of tests, including CPU, memory, and I/O performance.","kind":1,"preselect":true,"detail":"running `pkg install ubench` on the FreeBSD host. The benchmark is a bit more sophisticated and runs a series of tests, including CPU, memory, and I/O performance.","insertText":"running `pkg install ubench` on the FreeBSD host. 
The benchmark is a bit more sophisticated and runs a series of tests, including CPU, memory, and I/O performance.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":306},"end":{"line":368,"character":306}}}]}]}} + +APP 2025-04-04T19:34:03.375Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:03.457Z --> received didChange | language: markdown | contentVersion: 604 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.569Z --> received didChange | language: markdown | contentVersion: 605 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.648Z --> received didChange | language: markdown | contentVersion: 606 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.695Z --> received didChange | language: markdown | contentVersion: 607 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.777Z --> received didChange | language: markdown | contentVersion: 608 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.783Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":151,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":119} + +APP 2025-04-04T19:34:03.943Z --> received didChange | 
language: markdown | contentVersion: 609 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:03.985Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

// BenchmarkCPUSilly1 exercises simple integer multiplication.
func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

// BenchmarkCPUSilly2 exercises floating point addition, multiplication
// and division.
func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (roughly 8% in this run). The Go version is also a bit older, but I repeated the run with an up-to-date Go (1.24.x) and got similar results. There may be a slight Bhyve overhead, or FreeBSD may simply be a bit more efficient in this micro-benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the UNIX benchmark utility, which is available for FreeBSD. It was installed by simply running `doas pkg install ubench`.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads; for example, a Windows VM or a NetBSD VM to tinker with.

This flexibility keeps options open and makes it possible to run different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility, available for FreeBSD. It was installed by simply running `doas pkg`\n\n\n\n### FreeBSD host\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":617} + +APP 2025-04-04T19:34:05.882Z --> calling completion event + +APP 2025-04-04T19:34:05.882Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:34:05.882Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:05.882Z --> copilot | completion request + +APP 2025-04-04T19:34:05.883Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:34:05.980Z --> received didChange | language: markdown | contentVersion: 618 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:05.986Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":161,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":122} + +APP 2025-04-04T19:34:06.090Z --> received didChange | language: markdown | contentVersion: 619 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:06.137Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:06.138Z --> completion hints: | install ubench`. + +APP 2025-04-04T19:34:06.138Z --> sent request | {"jsonrpc":"2.0","id":121,"result":{"isIncomplete":false,"items":[{"label":"install ubench`.","kind":1,"preselect":true,"detail":"install ubench`.","insertText":"install ubench`.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":176},"end":{"line":368,"character":177}}}]}]}} + +APP 2025-04-04T19:34:06.138Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:06.160Z --> received didChange | language: markdown | contentVersion: 620 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:06.187Z --> received didChange | language: markdown | contentVersion: 621 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:06.187Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for 
self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso   1808 MB   4780 kBps   06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But we have to adjust this configuration to make Rocky Linux boot, plus make some other tweaks: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root   bhyve   6079   8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower (around 8%) than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the UNIX benchmark utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`.

### FreeBSD host

### FreeBSD @ Bhyve

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:06.877Z --> received didChange | language: markdown | contentVersion: 626 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:06.882Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":169,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":123} + +APP 2025-04-04T19:34:07.084Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their addresses to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands in each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all three VMs.
This time I set `PasswordAuthentication no`, so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Either way, Bhyve performs excellently here.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12G of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's even a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the UNIX benchmarking utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:07.162Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:34:07.162Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:07.337Z --> received didChange | language: markdown | contentVersion: 628 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:07.490Z --> received didChange | language: markdown | contentVersion: 629 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:07.639Z --> received didChange | language: markdown | contentVersion: 630 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:07.753Z --> received didChange | language: markdown | contentVersion: 631 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:07.849Z --> received didChange | language: markdown | contentVersion: 632 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:11.637Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:12.204Z --> received didChange | language: markdown | contentVersion: 633 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:12.426Z --> received didChange | language: markdown | contentVersion: 634 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:12.808Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:12.934Z --> received didChange | language: markdown | contentVersion: 635 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:13.119Z --> received didChange | language: markdown | contentVersion: 636 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:13.260Z --> received didChange | language: markdown | contentVersion: 637 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:13.976Z --> received didChange | language: markdown | contentVersion: 638 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:14.610Z --> received didChange | language: markdown | contentVersion: 639 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:14.733Z --> 
received didChange | language: markdown | contentVersion: 640 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:14.984Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":5,"line":372},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":124} + +APP 2025-04-04T19:34:15.185Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the UNIX benchmark utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`\n\n### FreeBSD host\n\n```vm`\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":640} + +APP 2025-04-04T19:34:15.185Z --> calling completion event + +APP 2025-04-04T19:34:15.185Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":372,"character":0},"end":{"line":373,"character":0}}}] + +APP 2025-04-04T19:34:15.185Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":372,"character":0},"end":{"line":373,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:15.185Z --> copilot | completion request + +APP 2025-04-04T19:34:15.186Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:34:15.663Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:15.664Z --> completion hints: | + +APP 2025-04-04T19:34:15.664Z --> sent request | 
{"jsonrpc":"2.0","id":124,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":372,"character":5},"end":{"line":372,"character":6}}}]}]}} + +APP 2025-04-04T19:34:15.664Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:15.673Z --> received didChange | language: markdown | contentVersion: 641 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:15.861Z --> received didChange | language: markdown | contentVersion: 642 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:15.883Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:15.994Z --> received didChange | language: markdown | contentVersion: 643 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:16.094Z --> received didChange | language: markdown | contentVersion: 644 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:16.338Z --> received didChange | language: markdown | contentVersion: 645 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:17.085Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:17.163Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:17.928Z --> received didChange | language: markdown | contentVersion: 646 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:19.799Z --> received didChange | language: markdown | contentVersion: 647 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:21.297Z --> received didChange | language: markdown | contentVersion: 648 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:22.439Z --> received didChange | language: markdown | contentVersion: 649 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:25.186Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:27.439Z --> received didChange | language: markdown | contentVersion: 650 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:38.618Z --> received didChange | language: markdown | contentVersion: 651 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:38.759Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":69,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":125} + +APP 2025-04-04T19:34:38.960Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix benchmark utility, available for FreeBSD. 
It was installed by simply running `doas pkg install ubench`.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":651} + +APP 2025-04-04T19:34:38.960Z --> calling completion event + +APP 2025-04-04T19:34:38.960Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:34:38.960Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:38.960Z --> copilot | completion request + +APP 2025-04-04T19:34:38.961Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:34:38.995Z --> received didChange | language: markdown | contentVersion: 652 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:39.070Z --> received didChange | language: markdown | contentVersion: 653 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:39.241Z --> received didChange | language: markdown | contentVersion: 654 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:39.360Z --> received didChange | language: markdown | contentVersion: 655 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:39.444Z --> received 
didChange | language: markdown | contentVersion: 656 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:39.450Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":74,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":126} + +APP 2025-04-04T19:34:39.623Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:39.624Z --> completion hints: | Unix Benchmark Utility. This is a simple benchmark tool that measures the performance of various system components, including CPU, memory, and I/O operations. It provides a quick way to assess the overall performance of a system. + +APP 2025-04-04T19:34:39.625Z --> sent request | {"jsonrpc":"2.0","id":125,"result":{"isIncomplete":false,"items":[{"label":"Unix Benchmark Utility. This is a simple benchmark tool that measures the performance of various system components, including CPU, memory, and I/O operations. It provides a quick way to assess the overall performance of a system.","kind":1,"preselect":true,"detail":"Unix Benchmark Utility. This is a simple benchmark tool that measures the performance of various system components, including CPU, memory, and I/O operations. It provides a quick way to assess the overall performance of a system.","insertText":"Unix Benchmark Utility. This is a simple benchmark tool that measures the performance of various system components, including CPU, memory, and I/O operations. 
It provides a quick way to assess the overall performance of a system.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":298},"end":{"line":368,"character":400}}}]}]}} + +APP 2025-04-04T19:34:39.625Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:39.650Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one, and the Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. 
It was installed by simply running `doas pkg install ubench`.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":656}
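As a quick back-of-the-envelope check, the ns/op figures from the silly benchmark runs above can be turned into a relative overhead number. A minimal sketch using awk (the `overhead` helper is hypothetical; the values are the ones measured above):

```sh
#!/bin/sh
# Percent overhead of a guest ns/op figure relative to the host figure.
overhead() {
    awk -v host="$1" -v guest="$2" 'BEGIN { printf "%.1f%%\n", (guest - host) / host * 100 }'
}

overhead 0.4022 0.4347   # Rocky Linux VM vs. FreeBSD host -> 8.1%
overhead 0.4022 0.4273   # FreeBSD VM vs. FreeBSD host -> 6.2%
```

So even in the worst case, the silly benchmark puts the virtualization overhead in the high single digits.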
+ + +APP 2025-04-04T19:34:44.644Z --> received didChange | language: markdown | contentVersion: 658 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:46.787Z --> received didChange | language: markdown | contentVersion: 659 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:46.856Z --> received didChange | language: markdown | contentVersion: 660 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:46.861Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":178,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":127} + +APP 2025-04-04T19:34:47.062Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but there are no VMs yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. For example, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be automated further, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped each VM again, ran `truncate` on its image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples shown are executed on `f0` (which hosts the VM `r0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
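Since the same static-IP change is needed on `r0`, `r1`, and `r2`, the per-VM commands can be generated from the address scheme. A minimal sketch (the `gen_commands` helper is hypothetical; it assumes the interface name `enp0s5` and the `192.168.1.12x` addresses used in this series, and only prints the commands rather than running them):

```sh
#!/bin/sh
# Print the static-IP nmcli command line for each Rocky VM (r0..r2).
gen_commands() {
    for i in 0 1 2; do
        ip="192.168.1.12$i"
        printf 'r%s: nmcli connection modify enp0s5 ipv4.addresses %s/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1 ipv4.method manual\n' "$i" "$ip"
    done
}

gen_commands
```

Each printed line could then be pasted into (or piped over SSH to) the matching VM.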
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.","languageId":"markdown","version":660}
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:47.395Z --> received didChange | language: markdown | contentVersion: 662 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:47.540Z --> received didChange | language: markdown | contentVersion: 663 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:47.545Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":181,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":128} + +APP 2025-04-04T19:34:47.747Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't make use of that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`.
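As a quick aside before the `ubench` runs: the gap between the VM and host numbers above can be quantified directly, e.g. for `BenchmarkCPUSilly1` (0.4347 vs. 0.4022 ns/op):

```sh
# Relative slowdown of the Rocky VM vs. the FreeBSD host for
# BenchmarkCPUSilly1, in percent (values copied from the runs above).
awk 'BEGIN { printf "%.1f%%\n", (0.4347 - 0.4022) / 0.4022 * 100 }'
# prints 8.1%
```

A single-digit percentage from single runs is well within noise; repeated runs (`go test -bench=. -count=5` or similar) would be needed for a firmer claim.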
### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. 
It can be\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":673} + +APP 2025-04-04T19:34:50.059Z --> skipping because content is stale + +APP 2025-04-04T19:34:50.059Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:50.059Z --> sent request | {"jsonrpc":"2.0","id":129,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:34:50.156Z --> received didChange | language: markdown | contentVersion: 674 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:50.225Z --> received didChange | language: markdown | contentVersion: 675 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:50.338Z --> received didChange | language: markdown | contentVersion: 676 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:50.589Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":190,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":130} + +APP 2025-04-04T19:34:50.670Z --> received didChange | language: markdown | contentVersion: 677 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:50.760Z --> received didChange | language: 
markdown | contentVersion: 678 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:50.791Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to change it to make Rocky Linux boot, plus make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done in a more automated, unattended fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. 
It can benchmark CPU and memory performance.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":678} + +APP 2025-04-04T19:34:50.791Z --> skipping because content is stale + +APP 2025-04-04T19:34:50.791Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:50.791Z --> sent request | {"jsonrpc":"2.0","id":130,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:34:51.004Z --> received didChange | language: markdown | contentVersion: 679 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:51.068Z --> received didChange | language: markdown | contentVersion: 680 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:51.072Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":194,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":131} + +APP 2025-04-04T19:34:51.274Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to change it to make Rocky Linux boot, plus make some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done in a more automated, unattended fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. 
It can benchmark CPU and memory performance.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":680} + +APP 2025-04-04T19:34:51.275Z --> calling completion event + +APP 2025-04-04T19:34:51.275Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:34:51.275Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:51.275Z --> copilot | completion request + +APP 2025-04-04T19:34:51.275Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:34:51.387Z --> received didChange | language: markdown | contentVersion: 681 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:51.521Z --> received didChange | language: markdown | contentVersion: 682 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:51.577Z --> received didChange | language: markdown | contentVersion: 683 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:51.707Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:51.708Z --> completion hints: | various things like CPU, memory, and disk I/O. 
It is available for FreeBSD and Linux, so we can compare the results. + +APP 2025-04-04T19:34:51.708Z --> sent request | {"jsonrpc":"2.0","id":131,"result":{"isIncomplete":false,"items":[{"label":"various things like CPU, memory, and disk I/O. It is available for FreeBSD and Linux, so we can compare the results.","kind":1,"preselect":true,"detail":"various things like CPU, memory, and disk I/O. It is available for FreeBSD and Linux, so we can compare the results.","insertText":"various things like CPU, memory, and disk I/O. It is available for FreeBSD and Linux, so we can compare the results.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":310},"end":{"line":368,"character":310}}}]}]}} + +APP 2025-04-04T19:34:51.708Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:51.710Z --> received didChange | language: markdown | contentVersion: 684 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:51.813Z --> received didChange | language: markdown | contentVersion: 685 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:51.819Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":195,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":132} + +APP 2025-04-04T19:34:51.930Z --> received didChange | language: markdown | contentVersion: 686 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance.
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this configuration needs a few adjustments. Furthermore, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. In the following, the examples are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`.
It can benchmark CPU and memory performance.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. 
It can benchmark CPU as w\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":693} + +APP 2025-04-04T19:34:52.864Z --> skipping because content is stale + +APP 2025-04-04T19:34:52.864Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:52.864Z --> sent request | {"jsonrpc":"2.0","id":134,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:34:52.913Z --> received didChange | language: markdown | contentVersion: 694 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:52.997Z --> received didChange | language: markdown | contentVersion: 695 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:53.171Z --> received didChange | language: markdown | contentVersion: 696 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:53.207Z --> received didChange | language: markdown | contentVersion: 697 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:53.212Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":207,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":135} + +APP 2025-04-04T19:34:53.286Z --> received 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number.
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But to make Rocky Linux boot, some adjustments are required (and as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM). So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (or on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on the VMs and entering the following commands on each of them:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its main strength is minimal overhead, which allows near-native performance for virtual machines while leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability on to virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
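The same check can later be repeated from inside a Linux guest, where the CPU flags show up in `/proc/cpuinfo` instead of `dmesg`. A minimal sketch; the hard-coded `flags` string here is just an example so the logic can be tried without real hardware:

```shell
# has_popcnt() takes a CPU flags string and checks for the popcnt token.
has_popcnt() {
    echo "$1" | tr ' ' '\n' | grep -qx popcnt
}

# Example flags string (shortened); on a real Linux guest use:
#   has_popcnt "$(grep -m1 '^flags' /proc/cpuinfo)"
flags="fpu vme de pse msr sse4_2 popcnt avx"
if has_popcnt "$flags"; then
    echo "POPCNT supported"
else
    echo "POPCNT missing"
fi
```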
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and takes away a lot of the manual overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs exist yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Furthermore, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
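A side note on the `truncate` step above: it only changes the file's apparent size, producing a sparse file, so the enlarged 100G image does not immediately consume 100G on the ZFS dataset. A quick demo of the principle on a throwaway file:

```shell
# `truncate -s` grows the apparent size without allocating blocks; the
# guest sees a bigger disk while the filesystem stores only written data.
f=$(mktemp)
truncate -s 1M "$f"
echo "apparent size: $(wc -c < "$f") bytes"
du -k "$f" | awk '{print "allocated: " $1 " KiB"}'   # likely 0 KiB: sparse
rm -f "$f"
```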
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (respectively the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we switch the network configuration of the VMs from DHCP to static addresses.
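Since the three VMs differ only in their hostname and the last octet of their IP address, the per-VM `nmcli` invocations shown further below can be generated with a small helper. This is just a sketch that prints the commands rather than running them; the interface name `enp0s5` is from my VMs and may differ on yours:

```shell
# Print the static-IP nmcli commands for one VM; run them inside the
# matching guest afterwards.
gen_nmcli() {
    vm=$1; ip=$2
    echo "# commands for $vm:"
    echo "nmcli connection modify enp0s5 ipv4.addresses $ip/24"
    echo "nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1"
    echo "nmcli connection modify enp0s5 ipv4.dns 192.168.1.1"
    echo "nmcli connection modify enp0s5 ipv4.method manual"
}

gen_nmcli r0 192.168.1.120
gen_nmcli r1 192.168.1.121
gen_nmcli r2 192.168.1.122
```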
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM (by running `reboot` inside it) to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`.
It can benchmark CPU as well as memory performance.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:54.307Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":217,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":137} + +APP 2025-04-04T19:34:54.508Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// This lives in a _test.go file, so it runs via "go test -bench=."
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD may simply be a little more efficient in this benchmark. Overall, Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Of course, this is not a scientific benchmark, so take the results with a grain of salt.

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark with `ubench`, the Unix Benchmark Utility, which is available as a FreeBSD package. I installed it by simply running `doas pkg install ubench`.
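As a quick aside before the `ubench` numbers: the silly-benchmark figures above can be turned into a rough estimate of the virtualization overhead. This is just arithmetic on the ns/op values quoted above, not part of the sillybench tooling:

```go
package main

import "fmt"

func main() {
	// ns/op figures copied from the benchmark runs above.
	host := 0.4022      // FreeBSD on bare metal
	linuxVM := 0.4347   // Rocky Linux VM on Bhyve
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve

	// Relative slowdown of a VM compared to the bare-metal host, in percent.
	overhead := func(vm float64) float64 { return (vm/host - 1) * 100 }
	fmt.Printf("Rocky Linux VM overhead: %.1f%%\n", overhead(linuxVM))
	fmt.Printf("FreeBSD VM overhead:     %.1f%%\n", overhead(freebsdVM))
	// Prints roughly 8.1% and 6.2% respectively.
}
```

So both guests stay within single-digit percent of bare metal for this micro-benchmark.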
`ubench` can benchmark CPU as well as memory performance.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
didChange | language: markdown | contentVersion: 712 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:54.883Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:54.884Z --> completion hints: | and disk performance. It is available for FreeBSD and Linux, so we can compare the results. + +APP 2025-04-04T19:34:54.884Z --> sent request | {"jsonrpc":"2.0","id":137,"result":{"isIncomplete":false,"items":[{"label":"and disk performance. It is available for FreeBSD and Linux, so we can compare the results.","kind":1,"preselect":true,"detail":"and disk performance. It is available for FreeBSD and Linux, so we can compare the results.","insertText":"and disk performance. It is available for FreeBSD and Linux, so we can compare the results.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":308},"end":{"line":368,"character":308}}}]}]}} + +APP 2025-04-04T19:34:54.884Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:54.949Z --> received didChange | language: markdown | contentVersion: 713 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.044Z --> received didChange | language: markdown | contentVersion: 714 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.129Z --> received didChange | language: markdown | contentVersion: 715 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.236Z --> received 
didChange | language: markdown | contentVersion: 716 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.334Z --> received didChange | language: markdown | contentVersion: 717 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.366Z --> received didChange | language: markdown | contentVersion: 718 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.491Z --> received didChange | language: markdown | contentVersion: 719 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.553Z --> received didChange | language: markdown | contentVersion: 720 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:34:55.558Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":230,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":138} + +APP 2025-04-04T19:34:55.760Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":720} + +APP 2025-04-04T19:34:55.760Z --> calling completion event + +APP 2025-04-04T19:34:55.760Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:34:55.760Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:34:55.760Z --> copilot | completion request + +APP 2025-04-04T19:34:55.761Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:34:56.007Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:34:56.007Z --> completion hints: | + +APP 2025-04-04T19:34:56.007Z --> sent request | {"jsonrpc":"2.0","id":138,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":230},"end":{"line":368,"character":230}}}]}]}} + +APP 2025-04-04T19:34:56.007Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:57.064Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:34:57.749Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:00.387Z --> received didChange | language: markdown | contentVersion: 721 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:00.574Z --> received didChange | language: markdown | contentVersion: 722 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:00.692Z --> received didChange | language: markdown | contentVersion: 723 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:00.760Z --> received didChange | language: markdown | contentVersion: 724 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:00.883Z --> received didChange | language: markdown | contentVersion: 725 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:00.968Z --> received didChange | language: markdown | contentVersion: 726 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:00.973Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":236,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":139} + +APP 2025-04-04T19:35:01.174Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run at all, or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

The configuration needs a few changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
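The address plan is regular: the FreeBSD hosts sit at 192.168.1.130-132 and the VMs `r0` to `r2` get 192.168.1.120-122 in the `lan.buetow.org` zone. As an illustration only (the actual setup is done by hand as shown in this section), the `/etc/hosts` entries for the VMs could be generated like this; `hostsEntries` is a hypothetical helper, not part of the real setup:

```go
package main

import "fmt"

// hostsEntries generates /etc/hosts lines for the r0..r(n-1) VMs,
// following the scheme used in this series: 192.168.1.12<n> for VM r<n>.
func hostsEntries(n int) []string {
	entries := make([]string, 0, n)
	for i := 0; i < n; i++ {
		entries = append(entries,
			fmt.Sprintf("192.168.1.%d r%d r%d.lan r%d.lan.buetow.org", 120+i, i, i, i))
	}
	return entries
}

func main() {
	for _, e := range hostsEntries(3) {
		fmt.Println(e)
	}
}
```

Piping its output through `doas tee -a /etc/hosts` on each FreeBSD host would append the same three lines that are added manually in this section.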
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
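As a side note, the gap between the silly-benchmark runs above is easy to quantify as a relative slowdown against the bare-metal FreeBSD run. A minimal sketch (the ns/op values are the BenchmarkCPUSilly1 figures quoted above; `relSlowdownPct` is just an illustrative helper, not part of sillybench):

```go
package main

import "fmt"

// relSlowdownPct returns how much slower (in percent) the measured
// value b is compared to the baseline a.
func relSlowdownPct(a, b float64) float64 {
	return (b - a) / a * 100
}

func main() {
	hostFreeBSD := 0.4022 // ns/op, BenchmarkCPUSilly1 on the FreeBSD host
	rockyVM := 0.4347     // ns/op, BenchmarkCPUSilly1 in the Rocky Linux VM
	freebsdVM := 0.4273   // ns/op, BenchmarkCPUSilly1 in the FreeBSD VM

	fmt.Printf("Rocky Linux VM: +%.1f%% per op\n", relSlowdownPct(hostFreeBSD, rockyVM))
	fmt.Printf("FreeBSD VM:     +%.1f%% per op\n", relSlowdownPct(hostFreeBSD, freebsdVM))
}
```

So both VMs stay within single-digit percent of bare metal in this (admittedly silly) micro-benchmark.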
### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
727 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:01.487Z --> received didChange | language: markdown | contentVersion: 728 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:01.492Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":238,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":140} + +APP 2025-04-04T19:35:01.693Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, i \n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":728} + +APP 2025-04-04T19:35:01.693Z --> calling completion event + +APP 2025-04-04T19:35:01.693Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:35:01.693Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:35:01.693Z --> copilot | completion request + +APP 2025-04-04T19:35:01.693Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:35:02.023Z --> received didChange | language: markdown | contentVersion: 729 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.107Z --> received didChange | language: markdown | contentVersion: 730 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.356Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:35:02.357Z --> completion hints: | am only showing the CPU benchmark results. The memory benchmark is not really comparable between FreeBSD and Linux, as they use different memory management systems. 
+ +APP 2025-04-04T19:35:02.357Z --> sent request | {"jsonrpc":"2.0","id":140,"result":{"isIncomplete":false,"items":[{"label":"am only showing the CPU benchmark results. The memory benchmark is not really comparable between FreeBSD and Linux, as they use different memory management systems.","kind":1,"preselect":true,"detail":"am only showing the CPU benchmark results. The memory benchmark is not really comparable between FreeBSD and Linux, as they use different memory management systems.","insertText":"am only showing the CPU benchmark results. The memory benchmark is not really comparable between FreeBSD and Linux, as they use different memory management systems.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":402},"end":{"line":368,"character":402}}}]}]}} + +APP 2025-04-04T19:35:02.357Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:02.415Z --> received didChange | language: markdown | contentVersion: 731 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.479Z --> received didChange | language: markdown | contentVersion: 732 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.616Z --> received didChange | language: markdown | contentVersion: 733 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.717Z --> received didChange | language: markdown | contentVersion: 734 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.723Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":244,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":141} + +APP 2025-04-04T19:35:02.771Z --> received didChange | language: markdown | contentVersion: 735 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.882Z --> received didChange | language: markdown | contentVersion: 736 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.924Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
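As an aside to the CPU-flag check above: the same `dmesg` pattern can also confirm the virtualization extension itself, since the `VMX` flag (Intel VT-x) appears in the same `Features2` line. A small sketch, assuming an Intel CPU (on AMD hardware the flag to look for would be `SVM` instead):

```shell
# Look for the VMX flag the same way as for POPCNT above.
if dmesg | grep -q 'Features2=.*VMX'; then
    echo 'VT-x available, bhyve can use hardware virtualization'
else
    echo 'no VT-x flag found'
fi
```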
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM.
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
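If one did want to script the repetitive parts, the per-host commands could be generated in a small loop. A hypothetical sketch, a dry run that only prints the commands (assuming the hosts and `vm` subcommands shown above, and SSH access to them):

```shell
# Print the ISO download and VM creation commands for each host
# instead of typing them three times (review, then pipe to sh).
iso='https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso'
for host in f0 f1 f2; do
    echo "ssh $host doas vm iso $iso"
    echo "ssh $host doas vm create rocky"
done
```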
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
Here, I run it with the `-s 1` option:\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":736} + +APP 2025-04-04T19:35:02.924Z --> skipping because content is stale + +APP 2025-04-04T19:35:02.924Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:02.924Z --> sent request | {"jsonrpc":"2.0","id":141,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:02.968Z --> received didChange | language: markdown | contentVersion: 737 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:02.973Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":247,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":142} + +APP 2025-04-04T19:35:03.091Z --> received didChange | language: markdown | contentVersion: 738 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:03.174Z --> skipping because content is stale + +APP 2025-04-04T19:35:03.174Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:03.174Z --> sent request | {"jsonrpc":"2.0","id":142,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:03.197Z --> received didChange | language: markdown | contentVersion: 739 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:03.272Z --> received didChange | language: markdown | contentVersion: 740 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:03.278Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":250,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":143} + +APP 2025-04-04T19:35:03.479Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s?
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` ZFS dataset (mounted at `/zroot/bhyve`):\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is, of course, not a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, I limit it to \n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":740} + +APP 2025-04-04T19:35:03.480Z --> calling completion event + +APP 2025-04-04T19:35:03.480Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:35:03.480Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:35:03.480Z --> copilot | completion request + +APP 2025-04-04T19:35:03.481Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:35:03.510Z --> received didChange | language: markdown | contentVersion: 741 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:03.701Z --> received didChange | language: markdown | contentVersion: 742 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:03.738Z --> received didChange | language: markdown | contentVersion: 743 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:03.901Z --> received didChange | language: markdown | contentVersion: 744 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:03.906Z --> received 
request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":254,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":144} + +APP 2025-04-04T19:35:03.995Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:35:03.996Z --> completion hints: | 1 second of testing time. The results are not comparable to the silly benchmark, but they give a good overview of the performance of the CPU and memory. + +APP 2025-04-04T19:35:03.996Z --> sent request | {"jsonrpc":"2.0","id":143,"result":{"isIncomplete":false,"items":[{"label":"1 second of testing time. The results are not comparable to the silly benchmark, but they give a good overview of the performance of the CPU and memory.","kind":1,"preselect":true,"detail":"1 second of testing time. The results are not comparable to the silly benchmark, but they give a good overview of the performance of the CPU and memory.","insertText":"1 second of testing time. 
The results are not comparable to the silly benchmark, but they give a good overview of the performance of the CPU and memory.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":402},"end":{"line":368,"character":402}}}]}]}} + +APP 2025-04-04T19:35:03.996Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:04.109Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` ZFS dataset (mounted at `/zroot/bhyve`):\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
For the runs below, I limit ubench to one CPU.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T19:35:04.405Z --> sent request | {"jsonrpc":"2.0","id":144,"result":{"isIncomplete":false,"items":[{"label":"CPU core to make it comparable to the previous benchmarks.","kind":1,"preselect":true,"detail":"CPU core to make it comparable to the previous benchmarks.","insertText":"CPU core to make it comparable to the previous benchmarks.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":312},"end":{"line":368,"character":312}}}]}]}} + +APP 2025-04-04T19:35:04.405Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:04.510Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:04.561Z --> received didChange | language: markdown | contentVersion: 747 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:04.836Z --> received didChange | language: markdown | contentVersion: 748 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:04.882Z --> received didChange | language: markdown | contentVersion: 749 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:04.888Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":259,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":145} + +APP 2025-04-04T19:35:04.939Z --> received didChange | language: markdown | contentVersion: 750 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.002Z --> received didChange | language: markdown | contentVersion: 751 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.090Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, i limit it to one CPU, as\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":751} + +APP 2025-04-04T19:35:05.090Z --> skipping because content is stale + +APP 2025-04-04T19:35:05.091Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:05.091Z --> sent request | {"jsonrpc":"2.0","id":145,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:05.115Z --> received didChange | language: markdown | contentVersion: 752 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.121Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":262,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":146} + +APP 2025-04-04T19:35:05.267Z --> received didChange | language: markdown | contentVersion: 753 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.318Z --> received didChange | language: markdown | contentVersion: 754 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.321Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add the corresponding entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each VM and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public SSH key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It can be installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
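To put the silly benchmark numbers above into perspective, the relative difference between the Linux VM and the FreeBSD host can be computed quickly (using the `BenchmarkCPUSilly1` figures, 0.4347 vs 0.4022 ns/op):

```sh
# Relative slowdown of the Linux VM vs. the FreeBSD host for BenchmarkCPUSilly1:
awk 'BEGIN { printf "%.1f%%\n", (0.4347 - 0.4022) / 0.4022 * 100 }'
# prints: 8.1%
```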
Here, I limit it to one CPU, as the other tests are not really multi-threaded anyway.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusions\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":754} + +APP 2025-04-04T19:35:05.321Z --> skipping because content is stale + +APP 2025-04-04T19:35:05.321Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:05.321Z --> sent request | {"jsonrpc":"2.0","id":146,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:05.421Z --> received didChange | language: markdown | contentVersion: 755 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.559Z --> received didChange | language: markdown | contentVersion: 756 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.565Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":266,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":147} + +APP 2025-04-04T19:35:05.761Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:05.765Z --> calling completion event + +APP 2025-04-04T19:35:05.765Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:35:05.765Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:35:05.765Z --> copilot | completion request + +APP 2025-04-04T19:35:05.766Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:35:05.824Z --> received didChange | language: markdown | contentVersion: 757 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:05.975Z --> received didChange | language: markdown | contentVersion: 758 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:06.042Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:35:06.042Z --> completion hints: | other tests are not really multi-threaded anyway.
+ +APP 2025-04-04T19:35:06.043Z --> sent request | {"jsonrpc":"2.0","id":147,"result":{"isIncomplete":false,"items":[{"label":"other tests are not really multi-threaded anyway.","kind":1,"preselect":true,"detail":"other tests are not really multi-threaded anyway.","insertText":"other tests are not really multi-threaded anyway.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":315},"end":{"line":368,"character":315}}}]}]}} + +APP 2025-04-04T19:35:06.043Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:06.969Z --> received didChange | language: markdown | contentVersion: 759 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:06.975Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":269,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":148} + +APP 2025-04-04T19:35:07.162Z --> received didChange | language: markdown | contentVersion: 760 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:07.177Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, some adjustments are needed. Furthermore, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
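A side note (my addition; the `/tmp` demo file below is hypothetical, not the real `disk0.img`): `truncate -s` only changes the file's apparent size. The image stays sparse, so enlarging it to 100G does not immediately consume 100G on the ZFS pool; blocks are allocated only as the guest writes to them. The effect can be demonstrated with a throwaway file:

```sh
# Sketch: grow a file sparsely, then compare apparent size vs. allocated blocks.
img=/tmp/sparse-demo.img   # hypothetical demo file for illustration only
rm -f "$img"
truncate -s 100M "$img"    # same mechanism as enlarging disk0.img
ls -lh "$img"              # apparent size: 100M
du -h "$img"               # allocated size: (near) zero until written to
rm -f "$img"
```

The same `ls` vs. `du` comparison works on FreeBSD with ZFS-backed image files.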
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, so I tried the same with the up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. Here, I limit it to one CPU:

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, i limit it to one CPU, as the VM has g\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":764} + +APP 2025-04-04T19:35:07.554Z --> skipping because content is stale + +APP 2025-04-04T19:35:07.554Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:07.554Z --> sent request | {"jsonrpc":"2.0","id":149,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:07.602Z --> received didChange | language: markdown | contentVersion: 765 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:07.721Z --> received didChange | language: markdown | contentVersion: 766 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:07.822Z --> received didChange | language: markdown | contentVersion: 767 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:07.827Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":277,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":150} + +APP 2025-04-04T19:35:08.028Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor for FreeBSD. Its main strength is minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the FreeBSD operating system's own facilities for storage and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Support for POPCNT matters for virtualization because guest operating systems use the instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance; without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management considerably. We also install the package required to boot guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM operating system is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with Red Hat Enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso   1808 MB   4780 kBps   06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this configuration needs a few adjustments. And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root   bhyve   6079   8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME ships with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
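For completeness: had I wanted unattended installs, Rocky's Anaconda installer supports kickstart files. A hypothetical minimal sketch, written here as a heredoc in the style of the other snippets (untested; every value is an assumption, not what I actually used):

```sh
#!/bin/sh
# Hypothetical: write a minimal kickstart file for an unattended Rocky 9
# install (untested sketch; all values below are assumptions).
cat > ks.cfg <<'END'
text
lang en_US.UTF-8
keyboard us
timezone UTC --utc
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
%packages
@^minimal-environment
%end
END
```

Such a file would then be handed to Anaconda via the `inst.ks=` boot option.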
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples below are all executed on `f0` (and on the VM `r0` running on it):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. Afterwards, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
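The numbering scheme is simple: VM `r$i` gets the address `192.168.1.12$i`. A tiny sketch that prints the plan (the generated lines are exactly the `/etc/hosts` entries used for the VMs):

```sh
#!/bin/sh
# Print the static address plan for the three Rocky VMs: r0..r2 -> 192.168.1.120..122
for i in 0 1 2; do
    printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```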
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
Here, I limit it to a single CPU with the `-s` flag, so that the host and VM runs stay directly comparable.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for the future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T19:35:08.399Z --> sent request | {"jsonrpc":"2.0","id":150,"result":{"isIncomplete":false,"items":[{"label":"4 CPUs, and I want to see how it performs with one CPU only. The `-s` option shows the system information.","kind":1,"preselect":true,"detail":"4 CPUs, and I want to see how it performs with one CPU only. The `-s` option shows the system information.","insertText":"4 CPUs, and I want to see how it performs with one CPU only. The `-s` option shows the system information.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":383},"end":{"line":368,"character":383}}}]}]}} + +APP 2025-04-04T19:35:08.399Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:08.545Z --> received didChange | language: markdown | contentVersion: 770 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:08.722Z --> received didChange | language: markdown | contentVersion: 771 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:08.792Z --> received didChange | language: markdown | contentVersion: 772 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:08.903Z --> received didChange | language: markdown | contentVersion: 773 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:08.909Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":283,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":151} + +APP 2025-04-04T19:35:09.111Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, i limit it to one CPU, as the VM has got fewer \n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":773}
+
+APP 2025-04-04T19:35:09.111Z --> calling completion event
+
+APP 2025-04-04T19:35:09.111Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}]
+
+APP 2025-04-04T19:35:09.111Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T19:35:09.111Z --> copilot | completion request
+
+APP 2025-04-04T19:35:09.112Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T19:35:09.220Z --> received didChange | language: markdown | contentVersion: 774 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T19:35:09.355Z --> received didChange | language: markdown | contentVersion: 775 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T19:35:09.422Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T19:35:09.422Z --> completion hints: | cores than the host. The `-s` option shows the system information.
+
+APP 2025-04-04T19:35:09.423Z --> sent request | {"jsonrpc":"2.0","id":151,"result":{"isIncomplete":false,"items":[{"label":"cores than the host. The `-s` option shows the system information.","kind":1,"preselect":true,"detail":"cores than the host. The `-s` option shows the system information.","insertText":"cores than the host. The `-s` option shows the system information.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":349},"end":{"line":368,"character":349}}}]}]}}
+
+APP 2025-04-04T19:35:09.423Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T19:35:09.603Z --> received didChange | language: markdown | contentVersion: 776 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T19:35:09.854Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":286,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":152}
+
+APP 2025-04-04T19:35:09.988Z --> received didChange | language: markdown | contentVersion: 777 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T19:35:10.056Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s?
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, this configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, which is available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, I limit it to one CPU, as the VM has fewer CPU cores than the host. The `-s` option shows the system information.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":777} + +APP 2025-04-04T19:35:10.056Z --> skipping because content is stale + +APP 2025-04-04T19:35:10.056Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:10.056Z --> sent request | {"jsonrpc":"2.0","id":152,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:10.092Z --> received didChange | language: markdown | contentVersion: 778 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.098Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":288,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":153} + +APP 2025-04-04T19:35:10.218Z --> received didChange | language: markdown | contentVersion: 779 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.298Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, I limit it to one CPU, as the VM has fewer CPUs than the host.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":779} + +APP 2025-04-04T19:35:10.298Z --> skipping because content is stale + +APP 2025-04-04T19:35:10.299Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:10.299Z --> sent request | {"jsonrpc":"2.0","id":153,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:10.325Z --> received didChange | language: markdown | contentVersion: 780 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.411Z --> received didChange | language: markdown | contentVersion: 781 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.561Z --> received didChange | language: markdown | contentVersion: 782 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.640Z --> received didChange | language: markdown | contentVersion: 783 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.646Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":293,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":154} + +APP 2025-04-04T19:35:10.749Z --> received 
didChange | language: markdown | contentVersion: 784 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.838Z --> received didChange | language: markdown | contentVersion: 785 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:10.847Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
Bhyve support for the POPCNT instruction is important because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot, plus a few other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add these entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell on the VMs via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and set `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's even a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, which is available for FreeBSD and was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
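For context, the silly-benchmark figures above translate into single-digit virtualization overheads. A quick back-of-the-envelope calculation in Go, using the `BenchmarkCPUSilly1` ns/op numbers printed earlier (the percentages are only as trustworthy as the benchmark itself):

```go
package main

import "fmt"

// overheadPercent returns how much slower vm is than host, in percent.
func overheadPercent(vm, host float64) float64 {
	return (vm - host) / host * 100
}

func main() {
	// ns/op figures from BenchmarkCPUSilly1 above.
	host := 0.4022      // FreeBSD on bare metal
	linuxVM := 0.4347   // Rocky Linux VM on Bhyve
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve

	fmt.Printf("Linux VM:   +%.1f%%\n", overheadPercent(linuxVM, host))
	fmt.Printf("FreeBSD VM: +%.1f%%\n", overheadPercent(freebsdVM, host))
}
```

So the Linux VM lands at roughly +8% and the FreeBSD VM at roughly +6% relative to the bare-metal host in this one micro-benchmark.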
Here, I limit it to one CPU, as the VM has fewer CPUs than the host.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":785} + +APP 2025-04-04T19:35:10.847Z --> skipping because content is stale + +APP 2025-04-04T19:35:10.847Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:10.847Z --> sent request | {"jsonrpc":"2.0","id":154,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:10.908Z --> received didChange | language: markdown | contentVersion: 786 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:11.058Z --> received didChange | language: markdown | contentVersion: 787 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:11.063Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":297,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":155} + +APP 2025-04-04T19:35:11.175Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":787} + +APP 2025-04-04T19:35:11.266Z --> calling completion event + +APP 2025-04-04T19:35:11.266Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:35:11.266Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:35:11.266Z --> copilot | completion request + +APP 2025-04-04T19:35:11.266Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:35:11.276Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:11.494Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:35:11.495Z --> completion hints: | host. 
+ +APP 2025-04-04T19:35:11.495Z --> sent request | {"jsonrpc":"2.0","id":155,"result":{"isIncomplete":false,"items":[{"label":"host.","kind":1,"preselect":true,"detail":"host.","insertText":"host.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":302},"end":{"line":368,"character":302}}}]}]}} + +APP 2025-04-04T19:35:11.495Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:11.693Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:11.962Z --> received didChange | language: markdown | contentVersion: 788 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:12.161Z --> received didChange | language: markdown | contentVersion: 789 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:12.245Z --> received didChange | language: markdown | contentVersion: 790 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:12.497Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":300,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":156} + +APP 2025-04-04T19:35:12.699Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all of its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso        1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

A few changes are needed to make Rocky Linux boot, though, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs (4 CPU cores and 14GB of RAM).
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900                *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
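For the record, the stop/enlarge/reinstall steps could be scripted across all three hosts. A rough sketch, not part of the original setup: the `run` wrapper only prints the ssh commands here (drop the `echo` to execute them for real), and password-less ssh plus doas on `f0`..`f2` are assumed:

```sh
# Print (or, without the echo, run) the disk-enlargement steps on each host.
run() { echo "ssh $*"; }

for host in f0 f1 f2; do
  run "$host" doas vm stop rocky
  run "$host" doas truncate -s 100G /zroot/bhyve/rocky/disk0.img
  run "$host" doas vm install rocky Rocky-9.5-x86_64-minimal.iso
done
```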
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VM automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, and there is currently only one VM per host; it may become useful later when there are more. After this change, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (0.4347 vs. 0.4022 ns/op). The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
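As a quick sanity check of the sillybench numbers above: the gap between the Linux guest and the FreeBSD host (0.4347 vs. 0.4022 ns/op) works out to roughly 8%, small but measurable:

```sh
# Relative slowdown of the Linux guest vs. the FreeBSD host,
# computed from the ns/op figures measured above.
awk 'BEGIN { printf "%.1f%%\n", (0.4347 - 0.4022) / 0.4022 * 100 }'
# prints 8.1%
```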
Here, I limit it to one CPU, as the VM has got fewer CPUs than the host.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The `-s` option shows the system information.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":352},"end":{"line":368,"character":352}}}]}]}} + +APP 2025-04-04T19:35:13.213Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:13.317Z --> received didChange | language: markdown | contentVersion: 791 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:13.480Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:13.503Z --> received didChange | language: markdown | contentVersion: 792 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:13.722Z --> received didChange | language: markdown | contentVersion: 793 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:13.941Z --> received didChange | language: markdown | contentVersion: 794 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:14.110Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:14.879Z --> received didChange | language: markdown | contentVersion: 795 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:14.884Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":297,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":157} + +APP 2025-04-04T19:35:15.086Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all three VMs. The following examples are all from `r0`, the VM running on host `f0`:

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host; it may become useful later when there are more. After this change, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add the corresponding entries on the FreeBSD hosts as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, which is available for FreeBSD and was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
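As a quick aside: the relative slowdown of the two VM runs versus bare metal can be computed from the `BenchmarkCPUSilly1` figures above (0.4022, 0.4347, and 0.4273 ns/op). This is just a back-of-the-envelope calculation, not a rigorous statistic:

```shell
# Estimate the apparent virtualization overhead in percent,
# relative to the bare-metal FreeBSD run (ns/op from the runs above).
awk 'BEGIN {
    host = 0.4022      # FreeBSD on bare metal
    linux_vm = 0.4347  # Rocky Linux VM on Bhyve
    bsd_vm = 0.4273    # FreeBSD VM on Bhyve
    printf "Rocky Linux VM: +%.1f%%\n", (linux_vm - host) / host * 100
    printf "FreeBSD VM:     +%.1f%%\n", (bsd_vm - host) / host * 100
}'
```

That is roughly +8.1% for the Rocky Linux VM and +6.2% for the FreeBSD VM, which is small enough to be within the noise of such a micro-benchmark.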
For `ubench`, I limit it to one CPU, as the VM has fewer CPUs than the host.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
2025-04-04T19:35:18.262Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":301,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":158} + +APP 2025-04-04T19:35:18.464Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, i limit it to one CPU, as the VM has got fewer CPUs than the hos \n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":799} + +APP 2025-04-04T19:35:18.464Z --> calling completion event + +APP 2025-04-04T19:35:18.464Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:35:18.464Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:35:18.464Z --> copilot | completion request + +APP 2025-04-04T19:35:18.465Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:35:18.615Z --> received didChange | language: markdown | contentVersion: 800 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:18.696Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:35:18.696Z --> completion hints: | t system. 
+ +APP 2025-04-04T19:35:18.696Z --> sent request | {"jsonrpc":"2.0","id":158,"result":{"isIncomplete":false,"items":[{"label":"t system.","kind":1,"preselect":true,"detail":"t system.","insertText":"t system.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":310},"end":{"line":368,"character":310}}}]}]}} + +APP 2025-04-04T19:35:18.696Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:18.740Z --> received didChange | language: markdown | contentVersion: 801 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:18.911Z --> received didChange | language: markdown | contentVersion: 802 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:18.917Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":302,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":159} + +APP 2025-04-04T19:35:19.036Z --> received didChange | language: markdown | contentVersion: 803 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:19.112Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:19.119Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, I limit it to one CPU, as the VM has fewer CPUs than the host system.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":803} + +APP 2025-04-04T19:35:19.119Z --> skipping because content is stale + +APP 2025-04-04T19:35:19.119Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:19.119Z --> sent request | {"jsonrpc":"2.0","id":159,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:19.168Z --> received didChange | language: markdown | contentVersion: 804 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:19.245Z --> received didChange | language: markdown | contentVersion: 805 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:19.458Z --> received didChange | language: markdown | contentVersion: 806 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:19.518Z --> received didChange | language: markdown | contentVersion: 807 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:19.572Z --> received didChange | language: markdown | contentVersion: 808 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:19.824Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":308,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":160} + +APP 2025-04-04T19:35:19.835Z --> received didChange | language: markdown | contentVersion: 809 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:20.026Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, i limit it to one CPU, as the VM has got fewer CPUs than the host system.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":809} + +APP 2025-04-04T19:35:20.026Z --> skipping because content is stale + +APP 2025-04-04T19:35:20.026Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:20.026Z --> sent request | {"jsonrpc":"2.0","id":160,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:35:21.267Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:22.701Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:35:24.436Z --> received didChange | language: markdown | contentVersion: 810 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:35:24.560Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":236,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":161} + +APP 2025-04-04T19:35:24.663Z --> received didChange | language: markdown | contentVersion: 811 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:44:41.908Z --> received didChange | language: markdown | contentVersion: 818 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:02.679Z --> received didChange | language: markdown | contentVersion: 819 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:02.684Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":259,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":163} + +APP 2025-04-04T19:45:02.885Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nFor the guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via the VNC console and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU, as the VM has fewer CPUs than the host system.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":819} + +APP 2025-04-04T19:45:02.886Z --> calling completion event + +APP 2025-04-04T19:45:02.886Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:02.886Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:02.886Z --> copilot | completion request + +APP 2025-04-04T19:45:02.887Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:03.297Z --> received didChange | language: markdown | contentVersion: 820 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:03.397Z --> received didChange | language: markdown | contentVersion: 821 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:03.500Z --> received didChange | language: markdown | contentVersion: 822 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:03.597Z --> received didChange | language: markdown | contentVersion: 823 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:03.604Z --> received 
request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":263,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":164} + +APP 2025-04-04T19:45:03.656Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:03.656Z --> completion hints: | core and 1GB of memory. The results are in `ops/sec` (operations per second). + +APP 2025-04-04T19:45:03.657Z --> sent request | {"jsonrpc":"2.0","id":163,"result":{"isIncomplete":false,"items":[{"label":"core and 1GB of memory. The results are in `ops/sec` (operations per second).","kind":1,"preselect":true,"detail":"core and 1GB of memory. The results are in `ops/sec` (operations per second).","insertText":"core and 1GB of memory. The results are in `ops/sec` (operations per second).","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":336},"end":{"line":368,"character":388}}}]}]}} + +APP 2025-04-04T19:45:03.657Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:03.783Z --> received didChange | language: markdown | contentVersion: 824 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:03.807Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nFor the guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via the VNC console and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for t, as the VM has got fewer CPUs than the host system.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":824} + +APP 2025-04-04T19:45:03.807Z --> skipping because content is stale + +APP 2025-04-04T19:45:03.807Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:03.807Z --> sent request | {"jsonrpc":"2.0","id":164,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:45:03.869Z --> received didChange | language: markdown | contentVersion: 825 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:03.963Z --> received didChange | language: markdown | contentVersion: 826 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:04.056Z --> received didChange | language: markdown | contentVersion: 827 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:04.061Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":267,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":165} + +APP 2025-04-04T19:45:04.131Z --> received didChange | language: markdown | contentVersion: 828 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:04.263Z --> running 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its main strength is minimal overhead, which allows virtual machines to achieve near-native performance while leveraging FreeBSD's networking and storage capabilities.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently.
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (or the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host; it may become useful later when there are more. After adding these lines, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we switch the network configuration of the VMs from DHCP to static addresses.
As per the previous post in this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD (installed with `doas pkg install ubench`). It can benchmark CPU as well as memory performance.
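Before looking at the `ubench` numbers, it's worth quantifying the gaps from the silly benchmark. A quick Go calculation with the `BenchmarkCPUSilly1` ns/op values hard-coded from the runs above (so this is just arithmetic, not a new measurement):

```go
package main

import "fmt"

func main() {
	// ns/op values from the BenchmarkCPUSilly1 runs above.
	host := 0.4022      // FreeBSD host
	linuxVM := 0.4347   // Rocky Linux VM on Bhyve
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve

	overhead := func(vm float64) float64 { return (vm - host) / host * 100 }
	fmt.Printf("Linux VM overhead:   %.1f%%\n", overhead(linuxVM))
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overhead(freebsdVM))
}
```

So both VMs land within roughly 6-8% of the bare-metal result in this (unscientific) micro-benchmark.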
Here, we limit `ubench` to a single CPU, as the VM has fewer CPUs than the host system.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first , as the VM has got fewer CPUs than the host system.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":833} + +APP 2025-04-04T19:45:05.101Z --> calling completion event + +APP 2025-04-04T19:45:05.101Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:05.101Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:05.101Z --> copilot | completion request + +APP 2025-04-04T19:45:05.101Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:05.149Z --> received didChange | language: markdown | contentVersion: 834 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:05.208Z --> received didChange | language: markdown | contentVersion: 835 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:05.514Z --> received didChange | language: markdown | contentVersion: 836 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:05.686Z --> received didChange | language: markdown | contentVersion: 837 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:05.703Z --> response | 
https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:05.704Z --> completion hints: | test and one memory test. The `-s` option is used to run the tests in a single thread, which is useful for benchmarking CPU performance in a more controlled manner. + +APP 2025-04-04T19:45:05.705Z --> sent request | {"jsonrpc":"2.0","id":166,"result":{"isIncomplete":false,"items":[{"label":"test and one memory test. The `-s` option is used to run the tests in a single thread, which is useful for benchmarking CPU performance in a more controlled manner.","kind":1,"preselect":true,"detail":"test and one memory test. The `-s` option is used to run the tests in a single thread, which is useful for benchmarking CPU performance in a more controlled manner.","insertText":"test and one memory test. The `-s` option is used to run the tests in a single thread, which is useful for benchmarking CPU performance in a more controlled manner.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":437},"end":{"line":368,"character":489}}}]}]}} + +APP 2025-04-04T19:45:05.705Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:05.817Z --> received didChange | language: markdown | contentVersion: 838 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:05.924Z --> received didChange | language: markdown | contentVersion: 839 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:05.950Z --> received didChange | language: markdown | contentVersion: 840 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution running on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to set up the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Support for POPCNT matters for virtualization because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This default configuration won't boot Rocky Linux, though, so a few adjustments are needed. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
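These adjustments are identical on every host, so instead of editing each `rocky.conf` interactively, the target configuration (shown next) could also be rendered from a small template. A minimal sketch, with a hypothetical helper name and placeholder UUID/MAC values (each real VM must keep its own unique `uuid` and `network0_mac`):

```shell
#!/bin/sh
# Sketch: render a vm-bhyve guest config from per-VM parameters.
# render_vm_conf is a hypothetical helper; the UUID/MAC passed in
# below are placeholders, not the real per-VM values.
render_vm_conf() {
    vm_uuid="$1"
    vm_mac="$2"
    cat <<EOF
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="${vm_uuid}"
network0_mac="${vm_mac}"
EOF
}

render_vm_conf "00000000-0000-0000-0000-000000000000" \
               "58:9c:fc:00:00:00" > rocky.conf
```

On a real host, the rendered file would replace `/zroot/bhyve/rocky/rocky.conf` before running `vm install`.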
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will become useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
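Only the last address octet and the hostname differ between the three VMs, so the static-IP commands shown below could also be generated per VM instead of typed out by hand. A minimal sketch, assuming NetworkManager's `nmcli` and the interface name `enp0s5` inside the VMs:

```shell
#!/bin/sh
# Sketch: emit the per-VM static IP commands for r0..r2.
# Assumes NetworkManager (nmcli) and interface enp0s5 in the guests;
# adjust the subnet and gateway to your own network.
emit_ip_commands() {
    idx="$1"                       # 0, 1 or 2
    ip="192.168.1.12${idx}"
    host="r${idx}.lan.buetow.org"
    printf 'nmcli connection modify enp0s5 ipv4.addresses %s/24\n' "$ip"
    printf 'nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n'
    printf 'nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n'
    printf 'nmcli connection modify enp0s5 ipv4.method manual\n'
    printf 'hostnamectl set-hostname %s\n' "$host"
}

for i in 0 1 2; do
    emit_ip_commands "$i"
done
```

The emitted commands would then be pasted into the root shell of the matching VM.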
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to also install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, of course, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
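`ubench` prints its scores as `Ubench Single ...:` lines, so comparing a host run against a VM run can be scripted rather than eyeballed. A small sketch; the host score is taken from the run shown below, while the VM score is a made-up placeholder:

```shell
#!/bin/sh
# Sketch: pull the "Ubench Single AVG" score out of captured ubench
# output and compute how close a VM gets to the host.
avg_score() {
    awk '/Ubench Single AVG:/ { print $NF }'
}

# Host score from the run shown below; VM score is a placeholder.
host_out="Ubench Single AVG: 1188123"
vm_out="Ubench Single AVG: 1100000"

host=$(printf '%s\n' "$host_out" | avg_score)
vm=$(printf '%s\n' "$vm_out" | avg_score)

# Integer percentage of the host score reached by the VM.
echo "VM reaches $(( vm * 100 / host ))% of the host score"
```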
Here, we run it in single-threaded mode with the `-s` flag, as the VM has fewer CPUs than the host system.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":843} + +APP 2025-04-04T19:45:06.275Z --> skipping because content is stale + +APP 2025-04-04T19:45:06.275Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:06.275Z --> sent request | {"jsonrpc":"2.0","id":167,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:45:06.368Z --> received didChange | language: markdown | contentVersion: 844 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:06.474Z --> received didChange | language: markdown | contentVersion: 845 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:06.584Z --> received didChange | language: markdown | contentVersion: 846 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:06.589Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":282,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":168} + +APP 2025-04-04T19:45:06.791Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nNext, we change the network configuration of the VMs from DHCP to static addresses. 
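Since the same handful of network commands has to be entered inside each of the three VMs, one way to keep them consistent is to generate the per-VM commands from a small table. A sketch, assuming the VMs run NetworkManager (`nmcli`) and that the interface inside them is named `enp0s5`, as it was in my installs:

```sh
# Print the static-network commands to paste into a root shell on each VM.
gen_net_cmds() {
    host=$1 ip=$2
    cat <<EOF
nmcli connection modify enp0s5 ipv4.addresses $ip/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
nmcli connection up enp0s5
hostnamectl set-hostname $host.lan.buetow.org
EOF
}

for spec in r0:192.168.1.120 r1:192.168.1.121 r2:192.168.1.122; do
    echo "# --- ${spec%%:*} ---"
    gen_net_cmds "${spec%%:*}" "${spec##*:}"
done
```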
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`).\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
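Before moving on, a quick back-of-the-envelope calculation of the gap seen in the Go benchmark above (`BenchmarkCPUSilly1`, FreeBSD host vs. Rocky Linux VM):

```sh
# Relative difference between 0.4347 ns/op (Rocky Linux VM) and
# 0.4022 ns/op (FreeBSD host) from the sillybench runs above.
host_ns=0.4022
vm_ns=0.4347
awk -v h="$host_ns" -v v="$vm_ns" \
    'BEGIN { printf "VM is %.1f%% slower\n", (v - h) / h * 100 }'
# prints "VM is 8.1% slower"
```

So the VM is only around 8% slower in this micro-benchmark, consistent with the "slight overhead" observation above.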
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":846} + +APP 2025-04-04T19:45:06.791Z --> calling completion event + +APP 2025-04-04T19:45:06.791Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:06.791Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:06.791Z --> copilot | completion request + +APP 2025-04-04T19:45:06.792Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:07.014Z --> received didChange | language: markdown | contentVersion: 847 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:07.194Z --> received didChange | language: markdown | contentVersion: 848 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:07.341Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:07.342Z --> completion hints: | `-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. The default is 10 seconds, but I set it to 1 second for the first run. 
+ +APP 2025-04-04T19:45:07.342Z --> sent request | {"jsonrpc":"2.0","id":168,"result":{"isIncomplete":false,"items":[{"label":"`-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. The default is 10 seconds, but I set it to 1 second for the first run.","kind":1,"preselect":true,"detail":"`-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. The default is 10 seconds, but I set it to 1 second for the first run.","insertText":"`-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. The default is 10 seconds, but I set it to 1 second for the first run.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":474},"end":{"line":368,"character":526}}}]}]}} + +APP 2025-04-04T19:45:07.342Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:07.374Z --> received didChange | language: markdown | contentVersion: 849 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:07.547Z --> received didChange | language: markdown | contentVersion: 850 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:07.718Z --> received didChange | language: markdown | contentVersion: 851 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:07.829Z --> received didChange | language: markdown | contentVersion: 852 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:07.935Z --> received didChange | language: markdown | contentVersion: 853 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:08.071Z --> received didChange | language: markdown | contentVersion: 854 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:08.158Z --> received didChange | language: markdown | contentVersion: 855 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:08.248Z --> received didChange | language: markdown | contentVersion: 856 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:08.254Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":282,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":169} + +APP 2025-04-04T19:45:08.456Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
…","languageId":"markdown","version":856} + +APP 2025-04-04T19:45:08.456Z --> calling completion event + +APP 2025-04-04T19:45:08.456Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:08.456Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:08.456Z --> copilot | completion request + +APP 2025-04-04T19:45:08.457Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:08.851Z --> received didChange | language: markdown | contentVersion: 857 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:09.184Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:09.185Z --> completion hints: | `-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. The default is 10 seconds, but I set it to 1 second for the first run. + +APP 2025-04-04T19:45:09.185Z --> sent request | {"jsonrpc":"2.0","id":169,"result":{"isIncomplete":false,"items":[{"label":"`-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. 
The default is 10 seconds, but I set it to 1 second for the first run.","kind":1,"preselect":true,"detail":"`-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. The default is 10 seconds, but I set it to 1 second for the first run.","insertText":"`-s 1` and then run it with all CPUs available on the system. The `-t` option allows you to set the test time in seconds. The default is 10 seconds, but I set it to 1 second for the first run.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":474},"end":{"line":368,"character":526}}}]}]}} + +APP 2025-04-04T19:45:09.185Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:10.541Z --> received didChange | language: markdown | contentVersion: 858 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:11.378Z --> received didChange | language: markdown | contentVersion: 859 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:12.887Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:14.707Z --> received didChange | language: markdown | contentVersion: 860 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:14.713Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":338,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":170} + +APP 2025-04-04T19:45:14.804Z --> received didChange | language: markdown | contentVersion: 861 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:14.914Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all of its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs exist yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900          *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
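For the record, a fully unattended route would be an Anaconda kickstart file passed to the installer via `inst.ks=` on the kernel command line. A minimal hypothetical sketch (not what I used; the file name and all choices are made up):

```
# minimal-ks.cfg - hypothetical kickstart for an unattended Rocky 9 install
text
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
%packages
@^minimal-environment
%end
```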
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host; it may become useful later when there are more. After this change, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to also install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is by no means a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
Here, we limit it to one CPU for the first run with the `-s` flag, as the VM has fewer CPUs than the host system.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":861} + +APP 2025-04-04T19:45:14.914Z --> skipping because content is stale + +APP 2025-04-04T19:45:14.914Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:14.914Z --> sent request | {"jsonrpc":"2.0","id":170,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:45:14.933Z --> received didChange | language: markdown | contentVersion: 862 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:15.101Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:15.147Z --> received didChange | language: markdown | contentVersion: 863 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:15.154Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":341,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":171} + +APP 2025-04-04T19:45:15.355Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with 
FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (a standard `testing` benchmark, so it lives in a `_test.go` file):

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
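As a side note on the Go benchmarks above: for a statistically sounder host-vs-VM comparison, repeated runs can be fed into `benchstat` from the `golang.org/x/perf` module. A sketch of how that could look (a suggestion only; not something I did for this post):

```sh
% go test -bench=. -count=10 | tee host.txt   # repeat the runs on the FreeBSD host
% go test -bench=. -count=10 | tee vm.txt     # repeat the runs inside the VM
% go run golang.org/x/perf/cmd/benchstat@latest host.txt vm.txt
```

`benchstat` then reports the mean difference per benchmark and whether it is statistically significant.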
Here, we limit it to one CPU for the first run with `-s`, as the VM has got fewer CPUs than the host system and we want to compare apples with apples.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
","triggerKind":2},"position":{"character":342,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":172} + +APP 2025-04-04T19:45:15.770Z --> received didChange | language: markdown | contentVersion: 867 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:15.865Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has got fewer CPUs than the host system and w.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":867} + +APP 2025-04-04T19:45:15.865Z --> skipping because content is stale + +APP 2025-04-04T19:45:15.865Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:15.866Z --> sent request | {"jsonrpc":"2.0","id":172,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:45:15.938Z --> received didChange | language: markdown | contentVersion: 868 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:15.945Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:15.945Z --> completion hints: | d we want to compare apples with apples. The `-t` option is used to set the test time to 1 second. + +APP 2025-04-04T19:45:15.945Z --> sent request | {"jsonrpc":"2.0","id":171,"result":{"isIncomplete":false,"items":[{"label":"d we want to compare apples with apples. The `-t` option is used to set the test time to 1 second.","kind":1,"preselect":true,"detail":"d we want to compare apples with apples. The `-t` option is used to set the test time to 1 second.","insertText":"d we want to compare apples with apples. 
The `-t` option is used to set the test time to 1 second.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":439},"end":{"line":368,"character":440}}}]}]}} + +APP 2025-04-04T19:45:15.945Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:16.068Z --> received didChange | language: markdown | contentVersion: 869 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:16.074Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":345,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":173} + +APP 2025-04-04T19:45:16.276Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all three VMs. The examples shown are executed on `f0` (which runs the VM `r0`):

### VM auto-start after host reboot

To start the VM automatically when a server boots, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes the system wait 5 seconds before starting each VM, and there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
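The per-VM commands below differ only in the last IP octet and the hostname, so they can also be generated to avoid copy-paste mistakes. A small sketch; the `print_net_config` helper is hypothetical and not part of the actual setup:

```sh
# Hypothetical helper: print the network setup commands for VM index 0, 1 or 2.
print_net_config() {
    i=$1
    ip="192.168.1.12${i}"
    cat <<EOF
nmcli connection modify enp0s5 ipv4.addresses ${ip}/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
hostnamectl set-hostname r${i}.lan.buetow.org
EOF
}

print_net_config 0   # paste the output into r0's console
```

Running it with `1` or `2` prints the matching commands for `r1` and `r2`.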
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs
and configured `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
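Before moving on: to put the silly-benchmark ns/op readings above into perspective, the relative difference is plain arithmetic on the measured values, computed here with a small awk one-liner:

```sh
# Relative slowdown of a VM reading vs. the bare-metal host reading, in percent.
overhead() {
    awk -v host="$1" -v vm="$2" 'BEGIN { printf "%.1f%%\n", (vm - host) / host * 100 }'
}

overhead 0.4022 0.4347   # Rocky Linux VM vs. FreeBSD host
overhead 0.4022 0.4273   # FreeBSD VM vs. FreeBSD host
```

With the numbers measured above, this prints 8.1% and 6.2%, i.e. at worst a mid-single-digit virtualization overhead in this silly benchmark.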
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system and we want to compare apples with apples.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) of the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T19:45:16.559Z --> sent request | {"jsonrpc":"2.0","id":173,"result":{"isIncomplete":false,"items":[{"label":"want to compare apples with apples.","kind":1,"preselect":true,"detail":"want to compare apples with apples.","insertText":"want to compare apples with apples.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":380},"end":{"line":368,"character":381}}}]}]}} + +APP 2025-04-04T19:45:16.559Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:16.619Z --> received didChange | language: markdown | contentVersion: 872 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:16.792Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:16.838Z --> received didChange | language: markdown | contentVersion: 873 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:16.844Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":349,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":174} + +APP 2025-04-04T19:45:17.046Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is 
the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has got fewer CPUs than the host system and we let .\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":873} + +APP 2025-04-04T19:45:17.047Z --> calling completion event + +APP 2025-04-04T19:45:17.047Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:17.047Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:17.047Z --> copilot | completion request + +APP 2025-04-04T19:45:17.047Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:17.479Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:17.480Z --> completion hints: | it run for 1 second with `-t 1`. The results are in the `Ubench Single AVG` line. + +APP 2025-04-04T19:45:17.480Z --> sent request | {"jsonrpc":"2.0","id":174,"result":{"isIncomplete":false,"items":[{"label":"it run for 1 second with `-t 1`. The results are in the `Ubench Single AVG` line.","kind":1,"preselect":true,"detail":"it run for 1 second with `-t 1`. The results are in the `Ubench Single AVG` line.","insertText":"it run for 1 second with `-t 1`. 
The results are in the `Ubench Single AVG` line.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":430},"end":{"line":368,"character":431}}}]}]}} + +APP 2025-04-04T19:45:17.480Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:17.778Z --> received didChange | language: markdown | contentVersion: 874 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:18.457Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:18.594Z --> received didChange | language: markdown | contentVersion: 875 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:18.706Z --> received didChange | language: markdown | contentVersion: 876 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:18.712Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":352,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":175} + +APP 2025-04-04T19:45:18.775Z --> received didChange | language: markdown | contentVersion: 877 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:18.910Z --> received didChange | language: markdown | 
contentVersion: 878 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:18.913Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable:  -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir:  -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve        1.74M   453G  1.74M  /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso        1808 MB 4780 kBps    06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot     bhyve      6079  8   tcp4   *:5900                *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000      0.4022 ns/op\nBenchmarkCPUSilly2-4    1000000000      0.4027 ns/op\nPASS\nok      codeberg.org/snonux/sillybench  0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000      0.4347 ns/op\nBenchmarkCPUSilly2-4    1000000000      0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1      1000000000      0.4273 ns/op\nBenchmarkCPUSilly2      1000000000      0.4286 ns/op\nPASS\nok      codeberg.org/snonux/sillybench  0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU:   671010 (0.40s)\nUbench Single MEM:  1705237 (0.48s)\n-----------------------------------\nUbench Single AVG:  1188123\n```\n\n### FreeBSD @ Bhyve\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":878} + +APP 2025-04-04T19:45:18.913Z --> skipping because content is stale + +APP 2025-04-04T19:45:18.913Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:18.913Z --> sent request | {"jsonrpc":"2.0","id":175,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:45:18.987Z --> received didChange | language: markdown | contentVersion: 879 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:19.066Z --> received didChange | language: markdown | contentVersion: 880 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:19.072Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":356,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":176} + +APP 2025-04-04T19:45:19.274Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable:  -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir:  -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve        1.74M   453G  1.74M  /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso        1808 MB 4780 kBps    06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot     bhyve      6079  8   tcp4   *:5900                *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000      0.4022 ns/op\nBenchmarkCPUSilly2-4    1000000000      0.4027 ns/op\nPASS\nok      codeberg.org/snonux/sillybench  0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000      0.4347 ns/op\nBenchmarkCPUSilly2-4    1000000000      0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1      1000000000      0.4273 ns/op\nBenchmarkCPUSilly2      1000000000      0.4286 ns/op\nPASS\nok      codeberg.org/snonux/sillybench  0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU:   671010 (0.40s)\nUbench Single MEM:  1705237 (0.48s)\n-----------------------------------\nUbench Single AVG:  1188123\n```\n\n### FreeBSD @ Bhyve\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":880} + +APP 2025-04-04T19:45:19.274Z --> calling completion event + +APP 2025-04-04T19:45:19.274Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:19.274Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:19.274Z --> copilot | completion request + +APP 2025-04-04T19:45:19.275Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:19.707Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:19.708Z --> completion hints: | for 1 second with `-s 1`. The results are in `Ubench Single CPU` and `Ubench Single MEM`: + +APP 2025-04-04T19:45:19.708Z --> sent request | {"jsonrpc":"2.0","id":176,"result":{"isIncomplete":false,"items":[{"label":"for 1 second with `-s 1`. The results are in `Ubench Single CPU` and `Ubench Single MEM`:","kind":1,"preselect":true,"detail":"for 1 second with `-s 1`. The results are in `Ubench Single CPU` and `Ubench Single MEM`:","insertText":"for 1 second with `-s 1`. 
The results are in `Ubench Single CPU` and `Ubench Single MEM`:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":445},"end":{"line":368,"character":446}}}]}]}} + +APP 2025-04-04T19:45:19.708Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:19.896Z --> received didChange | language: markdown | contentVersion: 881 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:20.272Z --> received didChange | language: markdown | contentVersion: 882 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:20.331Z --> received didChange | language: markdown | contentVersion: 883 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:20.477Z --> received didChange | language: markdown | contentVersion: 884 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:20.578Z --> received didChange | language: markdown | contentVersion: 885 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:20.672Z --> received didChange | language: markdown | contentVersion: 886 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:20.760Z --> received didChange | language: markdown | contentVersion: 887 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:20.765Z 
--> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":361,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":177} + +APP 2025-04-04T19:45:20.967Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to modify this configuration to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to a single-CPU run with `-s`, as the VM has fewer CPUs than the host system.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":887} + +APP 2025-04-04T19:45:20.968Z --> calling completion event + +APP 2025-04-04T19:45:20.968Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:20.968Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:20.968Z --> copilot | completion request + +APP 2025-04-04T19:45:20.968Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:21.370Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:21.371Z --> completion hints: | `-s 1` to limit the CPU usage to one core. The `-t` option is used to set the test time in seconds. + +APP 2025-04-04T19:45:21.371Z --> sent request | {"jsonrpc":"2.0","id":177,"result":{"isIncomplete":false,"items":[{"label":"`-s 1` to limit the CPU usage to one core. The `-t` option is used to set the test time in seconds.","kind":1,"preselect":true,"detail":"`-s 1` to limit the CPU usage to one core. The `-t` option is used to set the test time in seconds.","insertText":"`-s 1` to limit the CPU usage to one core. 
The `-t` option is used to set the test time in seconds.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":460},"end":{"line":368,"character":461}}}]}]}} + +APP 2025-04-04T19:45:21.371Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:21.476Z --> received didChange | language: markdown | contentVersion: 888 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:21.569Z --> received didChange | language: markdown | contentVersion: 889 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:21.819Z --> received didChange | language: markdown | contentVersion: 890 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:21.971Z --> received didChange | language: markdown | contentVersion: 891 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.044Z --> received didChange | language: markdown | contentVersion: 892 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.050Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":366,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":178} + +APP 2025-04-04T19:45:22.116Z --> received didChange | language: markdown | contentVersion: 893 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.242Z --> received didChange | language: markdown | contentVersion: 894 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.251Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to modify this configuration to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has got fewer CPUs than the host system and we let it run with full sp.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":894} + +APP 2025-04-04T19:45:22.251Z --> skipping because content is stale + +APP 2025-04-04T19:45:22.251Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:22.251Z --> sent request | {"jsonrpc":"2.0","id":178,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:45:22.386Z --> received didChange | language: markdown | contentVersion: 895 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.493Z --> received didChange | language: markdown | contentVersion: 896 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.580Z --> received didChange | language: markdown | contentVersion: 897 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.688Z --> received didChange | language: markdown | contentVersion: 898 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:22.693Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":372,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":179} + +APP 2025-04-04T19:45:22.813Z --> received 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its main strength is minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging FreeBSD's capabilities for storage and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
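A side note on the `truncate` step above: it completes instantly because it only extends the image file's logical size, creating a sparse file; disk blocks are only allocated as the guest writes data. This is easy to see with a throwaway file (the name `demo.img` is just for illustration, not the real VM image):

```shell
# Extend a fresh file to 100G the same way the VM image was enlarged.
truncate -s 100G demo.img
ls -lh demo.img   # logical size: 100G
du -h demo.img    # allocated size: (close to) zero, as the file is sparse
rm demo.img
```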
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Either way, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look them up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
Here, we limit `ubench` to a single CPU with `-s`, as the VM has fewer CPUs than the host system.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has got fewer CPUs than the host system and we let it run with full speed in t.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to set up the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, while leveraging the capabilities of the FreeBSD operating system for storage and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

`POPCNT` is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for virtualization because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass the capability through to virtual machines for better performance; without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

`vm-bhyve` stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, however, we need to adjust this configuration. And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14G of RAM.
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
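The three VMs follow a simple address scheme: `rN` lives at `192.168.1.12N`, mirroring the `f` hosts at `192.168.1.13N`. As an illustrative sketch (my own helper, not part of the actual setup), the `/etc/hosts` lines for the VMs can be generated from that scheme in Go:

```go
package main

import "fmt"

// hostsLine renders one /etc/hosts entry for VM rN, following the
// scheme used in this series: rN gets the address 192.168.1.12N.
// (Only valid for single-digit N, which is all we have here.)
func hostsLine(i int) string {
	host := fmt.Sprintf("r%d", i)
	return fmt.Sprintf("192.168.1.12%d %s %s.lan %s.lan.buetow.org", i, host, host, host)
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(hostsLine(i))
	}
}
```

Running it prints exactly the three lines we append to `/etc/hosts` below.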
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12G of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, which is available for FreeBSD and was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
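As a quick side calculation (mine, not part of the original toolchain) before the `ubench` numbers: the relative gap between the Rocky Linux VM and the FreeBSD host in the Go runs above can be quantified in a few lines:

```go
package main

import "fmt"

// relDiff returns the relative difference of b versus a, in percent.
func relDiff(a, b float64) float64 {
	return (b - a) / a * 100
}

func main() {
	// ns/op figures copied from the BenchmarkCPUSilly1 runs above:
	// 0.4022 on the FreeBSD host, 0.4347 in the Rocky Linux VM.
	fmt.Printf("Linux VM vs. FreeBSD host: +%.1f%%\n", relDiff(0.4022, 0.4347))
}
```

Roughly an 8% difference for this silly benchmark. Now, on to `ubench`.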
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the background.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T19:45:23.792Z --> sent request | {"jsonrpc":"2.0","id":181,"result":{"isIncomplete":false,"items":[{"label":"background.","kind":1,"preselect":true,"detail":"background.","insertText":"background.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":390},"end":{"line":368,"character":391}}}]}]}} + +APP 2025-04-04T19:45:23.792Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:24.522Z --> received didChange | language: markdown | contentVersion: 907 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:24.615Z --> received didChange | language: markdown | contentVersion: 908 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:24.803Z --> received didChange | language: markdown | contentVersion: 909 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:24.915Z --> received didChange | language: markdown | contentVersion: 910 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:25.167Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":382,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":182} + +APP 2025-04-04T19:45:25.357Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:25.367Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This default configuration won't boot Rocky Linux, though, and some other adjustments are needed as well: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
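A side note on the `truncate` call used above to grow the disk image: `truncate -s` only changes the file's apparent size, so the image grows sparsely and no blocks are actually written until the guest uses the space. A quick illustration on a throwaway file (this is my own example, not from the series; it behaves the same on FreeBSD and Linux):

```shell
# Create a scratch file and grow it sparsely, as done with disk0.img above
tmp=$(mktemp)
truncate -s 1M "$tmp"                # apparent size is now 1 MiB
size=$(wc -c < "$tmp" | tr -d ' ')   # apparent size in bytes
echo "$size"                         # prints 1048576
rm -f "$tmp"
```

This is also why enlarging the image before installation is cheap: the guest's partitioner sees a 100G disk, but the file on ZFS only consumes what the guest actually writes.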
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (which runs the VM `r0`):

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of them anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance.
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and let it run at full speed in the second run.

### FreeBSD host

```sh
paul@f0:~ % ubench -h
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
Usage: ubench [-cmhs] [-t testtime]

paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated, benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has got fewer CPUs than the host system and we let it run with full speed in the 2nd .\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":911} + +APP 2025-04-04T19:45:25.599Z --> calling completion event + +APP 2025-04-04T19:45:25.599Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:25.599Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:25.599Z --> copilot | completion request + +APP 2025-04-04T19:45:25.599Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:45:25.600Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:45:25.600Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:45:25.611Z --> received didChange | language: markdown | contentVersion: 912 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:25.809Z --> received didChange | language: markdown | contentVersion: 913 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:25.911Z --> received didChange | language: markdown | contentVersion: 914 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:25.972Z --> received didChange | language: markdown | contentVersion: 915 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:26.080Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:45:26.081Z --> completion hints: | run. + +APP 2025-04-04T19:45:26.081Z --> sent request | {"jsonrpc":"2.0","id":183,"result":{"isIncomplete":false,"items":[{"label":"run.","kind":1,"preselect":true,"detail":"run.","insertText":"run.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":387},"end":{"line":368,"character":388}}}]}]}} + +APP 2025-04-04T19:45:26.081Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:26.278Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:26.555Z --> received didChange | language: markdown | contentVersion: 916 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:26.759Z --> received didChange | language: markdown | contentVersion: 917 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:26.957Z --> received didChange | language: markdown | contentVersion: 918 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:27.048Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:45:27.130Z --> received didChange | language: markdown | contentVersion: 919 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:27.294Z --> received didChange | language: markdown | contentVersion: 920 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:27.424Z --> received didChange | language: markdown | contentVersion: 921 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:27.489Z --> received didChange | language: markdown | contentVersion: 922 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:45:27.740Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":386,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":184} + +APP 2025-04-04T19:45:27.755Z --> received didChange | language: markdown | contentVersion: 923 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 
2025-04-04T19:45:27.942Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to modify this configuration to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give the VMs beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU as well as memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\n```sh\npaul@f0:~ % ubench -h\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nUsage: ubench [-cmhs] [-t testtime]\n\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":923} + +APP 2025-04-04T19:45:27.942Z --> skipping because content is stale + +APP 2025-04-04T19:45:27.943Z --> sent request | {"jsonrpc":"2.0","id":184,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:45:35.599Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:22.704Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\nSingle CPU\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\n### FreeBSD @ Bhyve\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":937} + +APP 2025-04-04T19:47:22.704Z --> skipping because content is stale + +APP 2025-04-04T19:47:22.704Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:22.704Z --> sent request | {"jsonrpc":"2.0","id":185,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:47:22.727Z --> received didChange | language: markdown | contentVersion: 938 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:22.969Z --> received didChange | language: markdown | contentVersion: 939 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:23.220Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":10,"line":372},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":186} + +APP 2025-04-04T19:47:23.420Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust the configuration to make Rocky Linux boot (plus some other tweaks; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on the VMs and entering the following commands on each of them:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\nSingle CPU\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\n### FreeBSD @ Bhyve\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":939} + +APP 2025-04-04T19:47:23.421Z --> calling completion event + +APP 2025-04-04T19:47:23.421Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":372,"character":0},"end":{"line":373,"character":0}}}] + +APP 2025-04-04T19:47:23.421Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":372,"character":0},"end":{"line":373,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:47:23.421Z --> copilot | completion request + +APP 2025-04-04T19:47:23.421Z --> fetch | /copilot_internal/v2/token + +APP 2025-04-04T19:47:23.841Z --> response | https://api.github.com/copilot_internal/v2/token | 200 + +APP 2025-04-04T19:47:23.842Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:47:24.020Z --> received didChange | language: markdown | contentVersion: 940 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:24.253Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:47:24.253Z --> completion hints: | +```sh + +APP 2025-04-04T19:47:24.254Z --> sent request | 
{"jsonrpc":"2.0","id":186,"result":{"isIncomplete":false,"items":[{"label":"```sh","kind":1,"preselect":true,"detail":"```sh","insertText":"```sh","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":372,"character":15},"end":{"line":372,"character":15}}}]}]}} + +APP 2025-04-04T19:47:24.254Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:27.663Z --> received didChange | language: markdown | contentVersion: 941 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:27.819Z --> received didChange | language: markdown | contentVersion: 942 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:27.944Z --> received didChange | language: markdown | contentVersion: 943 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:28.104Z --> received didChange | language: markdown | contentVersion: 944 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:28.308Z --> received didChange | language: markdown | contentVersion: 945 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:28.338Z --> received didChange | language: markdown | contentVersion: 946 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:28.344Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":4,"line":388},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":187} + +APP 2025-04-04T19:47:28.506Z --> received didChange | language: markdown | contentVersion: 947 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:28.544Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust the configuration to make Rocky Linux boot (plus some other tweaks; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll C\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":947} + +APP 2025-04-04T19:47:28.544Z --> skipping because content is stale + +APP 2025-04-04T19:47:28.544Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:28.544Z --> sent request | {"jsonrpc":"2.0","id":187,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:47:28.594Z --> received didChange | language: markdown | contentVersion: 948 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:28.811Z --> received didChange | language: markdown | contentVersion: 949 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:29.044Z --> received didChange | language: markdown | contentVersion: 950 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:32.483Z --> received didChange | language: markdown | contentVersion: 951 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:32.488Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":9,"line":388},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":188} + +APP 2025-04-04T19:47:32.690Z --> running 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to set up the Bhyve hypervisor and install the Rocky Linux VMs.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Support for it matters for virtualization with Bhyve because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance.
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the `bhyve-firmware` package so that Bhyve can boot UEFI guests.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve now stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest operating system, I decided to use Rocky Linux.

Using Rocky Linux 9 for the VMs is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But the configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be automated and done unattended, but there are only three VMs to install, and we do this at most once a year, so the automation doesn't seem worth the effort.

### Enlarging the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (automatic partitioning of the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed on `f0` and the VM `r0` (which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds between starting VMs, but there is currently only one VM per host; it may become useful later when there are more. After this change, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM (shown for `r0`; the IP and hostname differ for `r1` and `r2`):

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding the line `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
We limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and let it run at full speed in the second run.

### FreeBSD host

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"jsonrpc":"2.0","id":188,"result":{"isIncomplete":false,"items":[{"label":":","kind":1,"preselect":true,"detail":":","insertText":":","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":388,"character":10},"end":{"line":388,"character":10}}}]}]}} + +APP 2025-04-04T19:47:33.164Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:33.271Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing the CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU with `-s 1`, as the VM has fewer CPUs than the host system.\n\n### FreeBSD host\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":388,"character":0},"end":{"line":389,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:47:33.447Z --> received didChange | language: markdown | contentVersion: 954 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:33.588Z --> received didChange | language: markdown | contentVersion: 955 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:33.682Z --> received didChange | language: markdown | contentVersion: 956 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:33.754Z --> received didChange | language: markdown | contentVersion: 957 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:33.760Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":15,"line":388},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":190} + +APP 2025-04-04T19:47:33.850Z --> received didChange | language: markdown | contentVersion: 958 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:33.961Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky 
Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with a)\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":958} + +APP 2025-04-04T19:47:33.962Z --> skipping because content is stale + +APP 2025-04-04T19:47:33.962Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:33.962Z --> sent request | {"jsonrpc":"2.0","id":190,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:47:33.972Z --> received didChange | language: markdown | contentVersion: 959 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:34.143Z --> received didChange | language: markdown | contentVersion: 960 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:34.161Z --> received didChange | language: markdown | contentVersion: 961 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:34.168Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":19,"line":388},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":191} + +APP 2025-04-04T19:47:34.369Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
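Should the resize ever need to be repeated across all three hosts, it could be scripted. A minimal sketch (the function name and `DRYRUN` switch are mine, not from the post; the real run assumes SSH access to `f0`..`f2`):

```shell
# Hypothetical helper: print (DRYRUN=1) or run the disk-resize command
# for the rocky VM on each given FreeBSD host.
resize_rocky_disks() {
	size="$1"
	shift
	for host in "$@"; do
		cmd="ssh $host doas truncate -s $size /bhyve/rocky/disk0.img"
		if [ "${DRYRUN:-0}" = "1" ]; then
			echo "$cmd"
		else
			$cmd
		fi
	done
}

# Dry run: only print what would be executed.
DRYRUN=1 resize_rocky_disks 100G f0 f1 f2
```

Remember that `truncate` only grows the image file; the partitions inside the guest still need to cover the new space (the installer's auto-partitioning handles that here).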
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
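The `r0`-`r2` host entries used in the next steps follow a simple pattern (base IP 192.168.1.120 plus the VM index), so they can also be generated. A throwaway sketch; the function is hypothetical, not from the post:

```shell
# Hypothetical generator for the r0..r2 /etc/hosts entries.
gen_rocky_hosts() {
	for i in 0 1 2; do
		printf '192.168.1.%d r%d r%d.lan r%d.lan.buetow.org\n' \
			"$((120 + i))" "$i" "$i" "$i"
	done
}

gen_rocky_hosts
```

Piped through `doas tee -a /etc/hosts`, this reproduces the entries added by hand below.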
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
  git clone https://codeberg.org/snonux/sillybench && \
  cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD (installed by simply running `doas pkg install ubench`). It can benchmark CPU and memory performance.
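Before moving on to ubench, a quick back-of-the-envelope calculation puts the silly-benchmark gap into perspective. The `overhead` helper below is mine, not from the post; it just computes the relative difference of the BenchmarkCPUSilly1 ns/op figures above:

```shell
# Hypothetical helper: relative difference of two ns/op figures, in percent.
overhead() {
	awk -v h="$1" -v v="$2" 'BEGIN { printf "%.1f%%\n", (v - h) / h * 100 }'
}

overhead 0.4022 0.4347 # Rocky Linux VM vs. FreeBSD host
overhead 0.4022 0.4273 # FreeBSD VM vs. FreeBSD host
```

Both VMs land within a single-digit percentage of the bare host, which supports the "Bhyve performs excellently" reading above.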
Here, we limit it to one CPU with `-s`, as the VM has fewer CPUs than the host system.

### FreeBSD host

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:47:34.903Z --> completion hints: | 4 CPUs): + +APP 2025-04-04T19:47:34.903Z --> sent request | {"jsonrpc":"2.0","id":191,"result":{"isIncomplete":false,"items":[{"label":"4 CPUs):","kind":1,"preselect":true,"detail":"4 CPUs):","insertText":"4 CPUs):","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":388,"character":27},"end":{"line":388,"character":28}}}]}]}} + +APP 2025-04-04T19:47:34.903Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:34.960Z --> received didChange | language: markdown | contentVersion: 966 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:35.128Z --> received didChange | language: markdown | contentVersion: 967 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:35.133Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":25,"line":388},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":192} + +APP 2025-04-04T19:47:35.286Z --> received didChange | language: markdown | contentVersion: 968 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:35.335Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky 
Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve V)\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":968} + +APP 2025-04-04T19:47:35.335Z --> skipping because content is stale + +APP 2025-04-04T19:47:35.336Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:35.336Z --> sent request | {"jsonrpc":"2.0","id":192,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:47:35.457Z --> received didChange | language: markdown | contentVersion: 969 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:35.709Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":27,"line":388},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":193} + +APP 2025-04-04T19:47:35.910Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
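A side note on the `truncate` step: growing `disk0.img` this way is instantaneous because the image file is sparse; only the logical size changes, and blocks are allocated on demand as the guest writes. A quick throwaway demonstration (the temporary file and the 100M size are just placeholders for this sketch):

```sh
# Growing a file with truncate changes its logical size without writing data.
f=$(mktemp)
truncate -s 100M "$f"
wc -c < "$f"   # logical size: 104857600 bytes
du -k "$f"     # allocated blocks stay near zero on sparse-capable filesystems
rm -f "$f"
```

The guest still has to be partitioned to use the extra space; here the Rocky installer simply auto-partitions the already-enlarged image.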
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
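The addressing scheme is simple: VM index `i` (0, 1, 2) maps to `192.168.1.12i` and hostname `r$i`, mirroring the `f$i` hosts on `192.168.1.13i`. As a small sketch (the names and the 192.168.1.0/24 range are specific to my home lab), the `/etc/hosts` entries for the Rocky VMs can be generated like this:

```sh
# Generate the r0..r2 /etc/hosts entries for the Rocky VMs.
for i in 0 1 2; do
  printf '192.168.1.%d r%d r%d.lan r%d.lan.buetow.org\n' "$((120 + i))" "$i" "$i" "$i"
done
```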
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
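As a rough sanity check of the gap between the sillybench numbers above: the Linux VM's 0.4347 ns/op versus the FreeBSD host's 0.4022 ns/op works out to about 8%. Back-of-the-envelope arithmetic only, not a rigorous measurement:

```sh
# Relative slowdown of the Linux VM vs. the FreeBSD host (sillybench ns/op).
awk 'BEGIN { host = 0.4022; vm = 0.4347; printf "%.1f%% slower in the VM\n", (vm - host) / host * 100 }'
# prints: 8.1% slower in the VM
```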
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs running):\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":969} + +APP 2025-04-04T19:47:35.910Z --> calling completion event + +APP 2025-04-04T19:47:35.910Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":388,"character":0},"end":{"line":389,"character":0}}}] + +APP 2025-04-04T19:47:35.910Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":388,"character":0},"end":{"line":389,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:47:35.910Z --> copilot | completion request + +APP 2025-04-04T19:47:35.910Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:47:35.926Z --> received didChange | language: markdown | contentVersion: 970 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.063Z --> received didChange | language: markdown | contentVersion: 971 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.069Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":29,"line":388},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":194} + +APP 2025-04-04T19:47:36.123Z --> received didChange | language: markdown | contentVersion: 972 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.234Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:47:36.234Z --> completion hints: | CPUs): + +APP 2025-04-04T19:47:36.234Z --> sent request | {"jsonrpc":"2.0","id":193,"result":{"isIncomplete":false,"items":[{"label":"CPUs):","kind":1,"preselect":true,"detail":"CPUs):","insertText":"CPUs):","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":388,"character":33},"end":{"line":388,"character":34}}}]}]}} + +APP 2025-04-04T19:47:36.234Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:36.269Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs s)\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":972} + +APP 2025-04-04T19:47:36.269Z --> skipping because content is stale + +APP 2025-04-04T19:47:36.270Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:36.270Z --> sent request | {"jsonrpc":"2.0","id":194,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:47:36.298Z --> received didChange | language: markdown | contentVersion: 973 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.354Z --> received didChange | language: markdown | contentVersion: 974 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.576Z --> received didChange | language: markdown | contentVersion: 975 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.716Z --> received didChange | language: markdown | contentVersion: 976 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.760Z --> received didChange | language: markdown | contentVersion: 977 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:36.853Z --> received didChange | language: markdown | contentVersion: 978 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:38.293Z --> received didChange | language: markdown | contentVersion: 979 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:38.888Z --> received didChange | language: markdown | contentVersion: 980 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:39.075Z --> received didChange | language: markdown | contentVersion: 981 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:39.473Z --> received didChange | language: markdown | contentVersion: 982 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:39.590Z --> received didChange | language: markdown | contentVersion: 983 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:39.841Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":390},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":195} + +APP 2025-04-04T19:47:39.880Z --> received didChange | language: markdown | contentVersion: 984 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:40.040Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth 
blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.
## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, however, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0
sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use more CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
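As a quick sanity check of the silly-benchmark numbers above, the relative slowdown of the guests versus the FreeBSD host can be computed from the `BenchmarkCPUSilly1` ns/op values (a rough sketch; `overheadPercent` is just an illustrative helper, and the inputs are the figures measured above):

```go
package main

import "fmt"

// overheadPercent computes how much slower a guest is relative to the
// host, given the ns/op values from the benchmark runs above.
func overheadPercent(hostNsOp, guestNsOp float64) float64 {
	return (guestNsOp/hostNsOp - 1) * 100
}

func main() {
	fmt.Printf("Rocky Linux VM: %.1f%%\n", overheadPercent(0.4022, 0.4347))
	fmt.Printf("FreeBSD VM:     %.1f%%\n", overheadPercent(0.4022, 0.4273))
}
```

That puts the silly benchmark's apparent overhead at roughly 8% for the Linux VM and 6% for the FreeBSD VM, well within what a single noisy micro-benchmark can account for.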
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:42.312Z --> received didChange | language: markdown | contentVersion: 991 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:42.624Z --> received didChange | language: markdown | contentVersion: 992 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:42.691Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:42.973Z --> received didChange | language: markdown | contentVersion: 993 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:43.071Z --> received didChange | language: markdown | contentVersion: 994 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:43.273Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:43.322Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":5,"line":390},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":196} + +APP 2025-04-04T19:47:43.423Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:43.522Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to check whether Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is simply a bit more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD (installed with `doas pkg install ubench`). It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) of the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:44.371Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:45.167Z --> received didChange | language: markdown | contentVersion: 996 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:47:45.913Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:47:53.524Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:15.955Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":16,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":197} + +APP 2025-04-04T19:49:16.113Z --> received didChange | language: markdown | contentVersion: 997 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:16.117Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":17,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":198} + +APP 2025-04-04T19:49:16.263Z --> received didChange | language: markdown | contentVersion: 998 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:16.319Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host ``\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":998} + +APP 2025-04-04T19:49:16.319Z --> skipping because content is stale + +APP 2025-04-04T19:49:16.319Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:16.319Z --> sent request | {"jsonrpc":"2.0","id":198,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:49:16.688Z --> received didChange | language: markdown | contentVersion: 999 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:16.951Z --> received didChange | language: markdown | contentVersion: 1000 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:17.204Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":20,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":199} + +APP 2025-04-04T19:49:17.306Z --> received didChange | language: markdown | contentVersion: 1001 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:17.405Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth 
blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install `bhyve-firmware`, the package required to boot guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs are `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit ubench to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench`

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` \n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1005} + +APP 2025-04-04T19:49:18.485Z --> calling completion event + +APP 2025-04-04T19:49:18.486Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}}}] + +APP 2025-04-04T19:49:18.486Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":370,"character":0},"end":{"line":371,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:49:18.486Z --> copilot | completion request + +APP 2025-04-04T19:49:18.486Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:49:18.535Z --> received didChange | language: markdown | contentVersion: 1006 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:18.675Z --> received didChange | language: markdown | contentVersion: 1007 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:18.775Z --> received didChange | language: markdown | contentVersion: 1008 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:18.838Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:49:18.838Z --> completion hints: | + +APP 2025-04-04T19:49:18.838Z --> sent request | 
{"jsonrpc":"2.0","id":200,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":370,"character":26},"end":{"line":370,"character":26}}}]}]}} + +APP 2025-04-04T19:49:18.838Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:18.910Z --> received didChange | language: markdown | contentVersion: 1009 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:19.023Z --> received didChange | language: markdown | contentVersion: 1010 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:19.270Z --> received didChange | language: markdown | contentVersion: 1011 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:19.322Z --> received didChange | language: markdown | contentVersion: 1012 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:19.573Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":33,"line":370},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":201} + +APP 2025-04-04T19:49:19.609Z --> received didChange | language: markdown | contentVersion: 1013 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:19.743Z --> received didChange | 
language: markdown | contentVersion: 1014 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:19.774Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, this default configuration needs a few changes (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1014} + +APP 2025-04-04T19:49:19.774Z --> skipping because content is stale + +APP 2025-04-04T19:49:19.774Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:19.774Z --> sent request | {"jsonrpc":"2.0","id":201,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:49:28.487Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:28.896Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":20,"line":393},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":202} + +APP 2025-04-04T19:49:29.098Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, this default configuration needs a few changes (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1014} + +APP 2025-04-04T19:49:29.099Z --> calling completion event + +APP 2025-04-04T19:49:29.099Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":393,"character":0},"end":{"line":394,"character":0}}}] + +APP 2025-04-04T19:49:29.099Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":393,"character":0},"end":{"line":394,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:49:29.099Z --> copilot | completion request + +APP 2025-04-04T19:49:29.099Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:49:29.417Z --> received didChange | language: markdown | contentVersion: 1015 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:29.598Z --> received didChange | language: markdown | contentVersion: 1016 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:29.730Z --> received didChange | language: markdown | contentVersion: 1017 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:29.795Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:49:29.796Z --> completion hints: | VM `ubench` benchmark + +APP 2025-04-04T19:49:29.796Z --> sent 
request | {"jsonrpc":"2.0","id":202,"result":{"isIncomplete":false,"items":[{"label":"VM `ubench` benchmark","kind":1,"preselect":true,"detail":"VM `ubench` benchmark","insertText":"VM `ubench` benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":393,"character":41},"end":{"line":393,"character":41}}}]}]}} + +APP 2025-04-04T19:49:29.796Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:30.160Z --> received didChange | language: markdown | contentVersion: 1018 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:30.334Z --> received didChange | language: markdown | contentVersion: 1019 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:30.493Z --> received didChange | language: markdown | contentVersion: 1020 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:30.567Z --> received didChange | language: markdown | contentVersion: 1021 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:30.839Z --> received didChange | language: markdown | contentVersion: 1022 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:31.102Z --> received didChange | language: markdown | contentVersion: 1023 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:31.231Z --> received didChange | language: markdown | contentVersion: 1024 | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems.
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability on to virtual machines for better performance. Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
We also install the `bhyve-firmware` package so that Bhyve can boot guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This default configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
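For the curious: the stock mechanism for unattended installs on Rocky Linux is an Anaconda kickstart file. Purely as an illustration (untested here, and all values are placeholders), a minimal `ks.cfg` might look like this:

```
# Hypothetical minimal kickstart sketch - not used in this setup
text
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot

%packages
@^minimal-environment
%end
```

The installer would pick such a file up via a boot parameter like `inst.ks=`. For just three VMs, though, clicking through VNC remains the simpler option.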
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed on `f0` (and on the VM `r0` running on it):

### VM auto-start after host reboot

To start the VM automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required: it makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After this change, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of them anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt.

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`, and it can benchmark CPU and memory performance.
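Before that, to put "slightly slower" into numbers: here is a quick shell calculation of the relative gaps, using the `BenchmarkCPUSilly1` ns/op figures from the silly-benchmark runs above:

```sh
# Relative slowdown vs. the bare FreeBSD host (ns/op figures from the runs above)
awk 'BEGIN {
    host = 0.4022; linux_vm = 0.4347; freebsd_vm = 0.4273
    printf "Linux VM overhead:   +%.1f%%\n", (linux_vm - host) / host * 100
    printf "FreeBSD VM overhead: +%.1f%%\n", (freebsd_vm - host) / host * 100
}'
```

That's roughly 8% for the Linux VM and 6% for the FreeBSD VM on this micro-benchmark.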
For the first run, we limit `ubench` to a single CPU with `-s`, as the VM has fewer CPUs than the host system; in the second run, we let it run at full speed.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench`

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
contentVersion: 1028 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:33.888Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":29,"line":393},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":204} + +APP 2025-04-04T19:49:34.084Z --> received didChange | language: markdown | contentVersion: 1029 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:34.090Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
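Before moving on: to put the sillybench numbers above in perspective, the relative slowdown of the Rocky Linux VM versus the FreeBSD host can be computed directly from the ns/op figures (a rough estimate from two single runs, not a rigorous statistic):

```shell
# BenchmarkCPUSilly1: FreeBSD host 0.4022 ns/op vs. Rocky Linux VM 0.4347 ns/op
awk 'BEGIN { host = 0.4022; vm = 0.4347; printf "%.1f%%\n", (vm - host) / host * 100 }'
# prints 8.1%
```

So the per-iteration difference is roughly 8% in this silly micro-benchmark.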
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1029} + +APP 2025-04-04T19:49:34.090Z --> skipping because content is stale + +APP 2025-04-04T19:49:34.090Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:34.090Z --> sent request | {"jsonrpc":"2.0","id":204,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:49:34.104Z --> received didChange | language: markdown | contentVersion: 1030 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:34.305Z --> received didChange | language: markdown | contentVersion: 1031 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:34.556Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":32,"line":393},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":205} + +APP 2025-04-04T19:49:34.711Z --> received didChange | language: markdown | contentVersion: 1032 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:34.758Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth 
blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
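As a quick refresher on what `POPCNT` computes, here is a tiny shell sketch (a hypothetical helper, not part of the setup) that counts set bits the same way the instruction does:

```shell
# popcount: count the set bits of a non-negative integer, like the POPCNT instruction.
popcount() {
  local n=$1 c=0
  while [ "$n" -gt 0 ]; do
    c=$((c + (n & 1)))   # add the lowest bit
    n=$((n >> 1))        # shift it away
  done
  echo "$c"
}

popcount 11    # binary 1011 -> prints 3
popcount 255   # binary 11111111 -> prints 8
```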
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, we need to adjust this configuration. We also give the VMs beefy specs (4 CPU cores and 14GB RAM), as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
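A side note on the `truncate` step above: it only changes the apparent file size; blocks are allocated lazily, so the enlarged `disk0.img` stays sparse until the guest actually writes to it. A quick illustration with a throwaway file (hypothetical, not from the post):

```shell
# Grow a file sparsely with truncate and compare apparent vs. allocated size.
f=$(mktemp)
truncate -s 100M "$f"   # apparent size is now 100 MiB
wc -c < "$f"            # byte count: 104857600
du -k "$f"              # allocated size: typically ~0 KiB, since the file is sparse
rm "$f"
```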
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs when the FreeBSD hosts boot, we add the following to the `rc.conf` on each host:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list="rocky"\nvm_delay="5"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these settings, the `AUTO` column shows `Yes`:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each VM and entering the following commands (shown for `r0`):\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1032} + +APP 2025-04-04T19:49:34.758Z --> skipping because content is stale + +APP 2025-04-04T19:49:34.758Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:34.758Z --> sent request | {"jsonrpc":"2.0","id":205,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:49:34.781Z --> received didChange | language: markdown | contentVersion: 1033 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:35.024Z --> received didChange | language: markdown | contentVersion: 1034 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:35.075Z --> received didChange | language: markdown | contentVersion: 1035 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:35.232Z --> received didChange | language: markdown | contentVersion: 1036 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:35.331Z --> received didChange | language: markdown | contentVersion: 1037 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:39.100Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:42.072Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:49:42.493Z --> received didChange | language: markdown | contentVersion: 1038 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:43.425Z --> received didChange | language: markdown | contentVersion: 1039 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:47.205Z --> received didChange | language: markdown | contentVersion: 1040 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:49:49.783Z --> received didChange | language: markdown | contentVersion: 1041 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:43.877Z --> received didChange | language: markdown | contentVersion: 1042 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:45.737Z --> received didChange | language: markdown | contentVersion: 1043 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:45.823Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":4,"line":405},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":206} + +APP 2025-04-04T19:51:46.025Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the guest OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, we need to adjust this configuration. We also give the VMs beefy specs (4 CPU cores and 14GB RAM), as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
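A quick note on the `cat <<END | doas tee -a` pattern used for `rc.conf` above: the heredoc terminator (`END`) must appear on a line of its own, otherwise the shell keeps reading input. A minimal sketch of the same append pattern, against a scratch file so no `doas`/`tee` privileges are needed:

```shell
# Sketch of the append-via-heredoc pattern, using a scratch file
# instead of /etc/rc.conf (so it is safe to run anywhere).
tmpconf=/tmp/demo-rc.conf
rm -f "$tmpconf"

cat <<END >> "$tmpconf"
vm_list="rocky"
vm_delay="5"
END

# Both settings landed in the file:
grep vm_ "$tmpconf"
rm "$tmpconf"
```

The same pattern is used below to append the Rocky VM entries to `/etc/hosts` on the FreeBSD hosts.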
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
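Before the `ubench` runs, a quick aside on the silly Go benchmark above: the relative slowdown of the Linux VM versus the FreeBSD host can be computed from the measured ns/op values with a one-liner (numbers taken from the `BenchmarkCPUSilly1` runs above):

```shell
# Relative slowdown of the Linux VM vs. the FreeBSD host,
# using the BenchmarkCPUSilly1 ns/op values measured above.
awk 'BEGIN { host = 0.4022; vm = 0.4347; printf "%.1f%%\n", (vm - host) / host * 100 }'
# prints: 8.1%
```

So the gap is roughly 8%, which, for a nanosecond-scale micro-benchmark, is well within "take it with a grain of salt" territory.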
Here, we limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux @ Bhyve `ubench` benchmark

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
received didChange | language: markdown | contentVersion: 1048 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:46.602Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:51:46.603Z --> completion hints: | Rocky Linux @ Bhyve `ubench` benchmark + +APP 2025-04-04T19:51:46.603Z --> sent request | {"jsonrpc":"2.0","id":206,"result":{"isIncomplete":false,"items":[{"label":"Rocky Linux @ Bhyve `ubench` benchmark","kind":1,"preselect":true,"detail":"Rocky Linux @ Bhyve `ubench` benchmark","insertText":"Rocky Linux @ Bhyve `ubench` benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":405,"character":42},"end":{"line":405,"character":68}}}]}]}} + +APP 2025-04-04T19:51:46.603Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:51:47.418Z --> received didChange | language: markdown | contentVersion: 1049 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:47.581Z --> received didChange | language: markdown | contentVersion: 1050 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:47.761Z --> received didChange | language: markdown | contentVersion: 1051 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:47.941Z --> received didChange | language: markdown | contentVersion: 1052 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:48.089Z 
--> received didChange | language: markdown | contentVersion: 1053 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:48.277Z --> received didChange | language: markdown | contentVersion: 1054 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:48.726Z --> received didChange | language: markdown | contentVersion: 1055 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:48.838Z --> received didChange | language: markdown | contentVersion: 1056 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:48.940Z --> received didChange | language: markdown | contentVersion: 1057 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:49.091Z --> received didChange | language: markdown | contentVersion: 1058 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:49.168Z --> received didChange | language: markdown | contentVersion: 1059 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:49.395Z --> received didChange | language: markdown | contentVersion: 1060 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:49.450Z --> received didChange | language: markdown | contentVersion: 1061 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:49.456Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":10,"line":405},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":207} + +APP 2025-04-04T19:51:49.642Z --> received didChange | language: markdown | contentVersion: 1062 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:49.658Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky L@ Bhyve `ubench` benchmark\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1062} + +APP 2025-04-04T19:51:49.658Z --> skipping because content is stale + +APP 2025-04-04T19:51:49.658Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:51:49.658Z --> sent request | {"jsonrpc":"2.0","id":207,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:51:49.705Z --> received didChange | language: markdown | contentVersion: 1063 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:49.957Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":12,"line":405},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":208} + +APP 2025-04-04T19:51:49.965Z --> received didChange | language: markdown | contentVersion: 1064 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:50.027Z --> received didChange | language: markdown | contentVersion: 1065 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:50.152Z --> received didChange | language: markdown | contentVersion: 1066 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:50.157Z --> running completion on buffer 
| {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable:  -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir:  -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some changes to make Rocky Linux boot, plus a few other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\nrocky  default  uefi  4  14G  0.0.0.0:5900  No  Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot  bhyve  6079  8  tcp4  *:5900  *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\nrocky  default  uefi  4  14G  0.0.0.0:5900  Yes [1]  Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4  1000000000  0.4022 ns/op\nBenchmarkCPUSilly2-4  1000000000  0.4027 ns/op\nPASS\nok  codeberg.org/snonux/sillybench  0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4  1000000000  0.4347 ns/op\nBenchmarkCPUSilly2-4  1000000000  0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1  1000000000  0.4273 ns/op\nBenchmarkCPUSilly2  1000000000  0.4286 ns/op\nPASS\nok  codeberg.org/snonux/sillybench  0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU:   671010 (0.40s)\nUbench Single MEM:  1705237 (0.48s)\n-----------------------------------\nUbench Single AVG:  1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux @ Bhyve `ubench` benchmark\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1066} + +APP 2025-04-04T19:51:50.157Z --> skipping because content is stale + +APP 2025-04-04T19:51:50.157Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:51:50.157Z --> sent request | {"jsonrpc":"2.0","id":208,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:51:50.422Z --> received didChange | language: markdown | contentVersion: 1067 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:50.428Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":16,"line":405},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":209} + +APP 2025-04-04T19:51:50.572Z --> received didChange | language: markdown | contentVersion: 1068 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:50.629Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s?
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But to make Rocky Linux boot, a few adjustments are needed. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
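A side note on the `truncate` step from the previous section: it only changes the apparent size of the image file. The file stays sparse, and disk blocks are only allocated once the guest actually writes to them. A quick demonstration in a scratch directory (not on the real disk image):

```shell
# Create a sparse 100M file, then compare apparent vs. allocated size.
truncate -s 100M sparse.img
ls -l sparse.img   # apparent size: 104857600 bytes
du -k sparse.img   # allocated size: close to 0 blocks
rm sparse.img
```

This is why enlarging the image is cheap and instantaneous, and why the guest still needs its partitions grown (or re-installed, as done here) to use the new space.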
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static IPs.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
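Before turning to `ubench`, here is a quick back-of-the-envelope comparison of the silly-benchmark numbers above. The helper function and the idea of expressing the difference as a percentage are my own addition, not part of `sillybench`:

```go
package main

import "fmt"

// overheadPct returns how much slower `vm` is than `host`, in percent.
func overheadPct(host, vm float64) float64 {
	return (vm/host - 1) * 100
}

func main() {
	// ns/op figures for BenchmarkCPUSilly1 from the runs above.
	host := 0.4022      // FreeBSD host
	freebsdVM := 0.4273 // FreeBSD guest in Bhyve
	linuxVM := 0.4347   // Rocky Linux guest in Bhyve

	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPct(host, freebsdVM))
	fmt.Printf("Linux VM overhead:   %.1f%%\n", overheadPct(host, linuxVM))
}
```

That works out to roughly 6% for the FreeBSD guest and 8% for the Linux guest; for a more rigorous comparison across multiple runs, something like Go's `benchstat` tool would be the better fit.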
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
2025-04-04T19:51:51.690Z --> received didChange | language: markdown | contentVersion: 1070 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:51.861Z --> received didChange | language: markdown | contentVersion: 1071 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:52.537Z --> received didChange | language: markdown | contentVersion: 1072 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:52.729Z --> received didChange | language: markdown | contentVersion: 1073 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:52.813Z --> received didChange | language: markdown | contentVersion: 1074 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:52.920Z --> received didChange | language: markdown | contentVersion: 1075 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:53.072Z --> received didChange | language: markdown | contentVersion: 1076 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:53.232Z --> received didChange | language: markdown | contentVersion: 1077 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:53.483Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":6,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":211} + +APP 2025-04-04T19:51:53.498Z --> received didChange | language: markdown | contentVersion: 1078 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:53.685Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Of course, this is not a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
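Before moving on, the sillybench numbers above can be summarized as relative overheads. This is just an illustrative sketch, not part of sillybench; the `overheadPercent` helper is hypothetical, and the hard-coded ns/op values are taken from the `BenchmarkCPUSilly1` runs above:

```go
package main

import "fmt"

// overheadPercent returns how much slower (in percent) the measured
// ns/op value is compared to the baseline ns/op value.
func overheadPercent(baselineNsOp, measuredNsOp float64) float64 {
	return (measuredNsOp - baselineNsOp) / baselineNsOp * 100
}

func main() {
	// ns/op values for BenchmarkCPUSilly1 as reported above.
	hostFreeBSD := 0.4022 // FreeBSD host
	vmLinux := 0.4347     // Rocky Linux 9 VM on Bhyve
	vmFreeBSD := 0.4273   // FreeBSD VM on Bhyve

	fmt.Printf("Linux VM overhead vs. host:   %.1f%%\n", overheadPercent(hostFreeBSD, vmLinux))
	fmt.Printf("FreeBSD VM overhead vs. host: %.1f%%\n", overheadPercent(hostFreeBSD, vmFreeBSD))
}
```

Both overheads land in the single-digit percent range, which matches the near-native performance claim from the introduction.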
Here, we limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I co\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1092} + +APP 2025-04-04T19:51:55.586Z --> skipping because content is stale + +APP 2025-04-04T19:51:55.586Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:51:55.586Z --> sent request | {"jsonrpc":"2.0","id":214,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:51:55.648Z --> received didChange | language: markdown | contentVersion: 1093 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:55.899Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":20,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":215} + +APP 2025-04-04T19:51:56.027Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:51:56.101Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the capabilities of the FreeBSD operating system for storage and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But this configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
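Before pointing a client at the hosts, one can verify that each VNC endpoint is actually listening. A minimal sketch (assuming `nc` from the FreeBSD base system, and the hostnames as configured in my LAN):

```sh
# Probe the bhyve VNC port (5900) on each FreeBSD host
for h in f0 f1 f2; do
  if nc -z -w 2 "$h" 5900 2>/dev/null; then
    echo "$h: VNC port open"
  else
    echo "$h: VNC port not reachable"
  fi
done
```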
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` and its VM `r0`:

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
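As a quick aside before the `ubench` runs: the gap between the FreeBSD host and the Rocky Linux VM in the silly benchmark above can be quantified in one line (a throwaway sketch using the `BenchmarkCPUSilly1` ns/op figures measured above):

```sh
# Relative slowdown of the Rocky Linux VM vs. the FreeBSD host (BenchmarkCPUSilly1)
echo "0.4022 0.4347" | awk '{ printf "VM overhead: %.1f%%\n", ($2 - $1) / $1 * 100 }'
# prints: VM overhead: 8.1%
```

Roughly an 8% difference, which may be Bhyve overhead, measurement noise, or both.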
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I could not get `ubench` to run on Rocky Linux 9.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1093} + +APP 2025-04-04T19:51:56.102Z --> calling completion event + +APP 2025-04-04T19:51:56.102Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:51:56.102Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:51:56.102Z --> copilot | completion request + +APP 2025-04-04T19:51:56.102Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:51:56.316Z --> received didChange | language: markdown | contentVersion: 1094 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:56.426Z --> received didChange | language: markdown | contentVersion: 1095 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:56.526Z --> received didChange | language: markdown | contentVersion: 1096 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:56.637Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:51:56.638Z --> completion hints: | ld not get `ubench` to run on Rocky Linux 9. 
It seems to be a bit outdated and not maintained anymore. I tried to compile it from source, but it failed with some errors. I will try to get it running in the future. + +APP 2025-04-04T19:51:56.638Z --> sent request | {"jsonrpc":"2.0","id":215,"result":{"isIncomplete":false,"items":[{"label":"ld not get `ubench` to run on Rocky Linux 9. It seems to be a bit outdated and not maintained anymore. I tried to compile it from source, but it failed with some errors. I will try to get it running in the future.","kind":1,"preselect":true,"detail":"ld not get `ubench` to run on Rocky Linux 9. It seems to be a bit outdated and not maintained anymore. I tried to compile it from source, but it failed with some errors. I will try to get it running in the future.","insertText":"ld not get `ubench` to run on Rocky Linux 9. It seems to be a bit outdated and not maintained anymore. I tried to compile it from source, but it failed with some errors. I will try to get it running in the future.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":407,"character":233},"end":{"line":407,"character":233}}}]}]}} + +APP 2025-04-04T19:51:56.638Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:51:57.430Z --> received didChange | language: markdown | contentVersion: 1097 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:57.627Z --> received didChange | language: markdown | contentVersion: 1098 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:57.808Z --> received didChange | language: markdown | contentVersion: 1099 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:57.992Z --> received didChange | language: markdown | contentVersion: 1100 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:58.174Z --> received didChange | language: markdown | contentVersion: 1101 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:58.348Z --> received didChange | language: markdown | contentVersion: 1102 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:58.503Z --> received didChange | language: markdown | contentVersion: 1103 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:58.594Z --> received didChange | language: markdown | contentVersion: 1104 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:58.728Z --> received didChange | language: markdown | contentVersion: 1105 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:58.855Z --> received didChange | language: markdown | contentVersion: 1106 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.056Z --> received didChange | language: markdown | contentVersion: 1107 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.107Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":22,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":216} + +APP 2025-04-04T19:51:59.185Z --> received didChange | language: markdown | contentVersion: 1108 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.274Z --> received didChange | language: markdown | contentVersion: 1109 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.279Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":24,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":217} + +APP 2025-04-04T19:51:59.397Z --> received didChange | language: markdown | contentVersion: 1110 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.481Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs a few changes. We also make some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and inside the VM `r0`, which runs on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't a\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1110} + +APP 2025-04-04T19:51:59.481Z --> skipping because content is stale + +APP 2025-04-04T19:51:59.481Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:51:59.481Z --> sent request | {"jsonrpc":"2.0","id":217,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:51:59.508Z --> received didChange | language: markdown | contentVersion: 1111 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.708Z --> received didChange | language: markdown | contentVersion: 1112 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.759Z --> received didChange | language: markdown | contentVersion: 1113 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.852Z --> received didChange | language: markdown | contentVersion: 1114 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:51:59.859Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":29,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":218} + +APP 2025-04-04T19:51:59.973Z --> 
received didChange | language: markdown | contentVersion: 1115 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:00.060Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V).
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
We also install the required package to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
2025-04-04T19:52:01.397Z --> received didChange | language: markdown | contentVersion: 1122 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:01.403Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":37,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":220} + +APP 2025-04-04T19:52:01.596Z --> received didChange | language: markdown | contentVersion: 1123 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:01.603Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
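As a side note, the guest-versus-host slowdown can be estimated from the sillybench ns/op figures above with a quick back-of-the-envelope calculation. The following helper is purely illustrative and not part of sillybench:

```go
package main

import "fmt"

// overheadPct returns how much slower (in percent) the "other" ns/op
// value is compared to the baseline ns/op value.
func overheadPct(baseline, other float64) float64 {
	return (other - baseline) / baseline * 100
}

func main() {
	// ns/op figures from the FreeBSD host and Rocky Linux guest runs above.
	fmt.Printf("BenchmarkCPUSilly1: %.1f%% overhead\n", overheadPct(0.4022, 0.4347))
	fmt.Printf("BenchmarkCPUSilly2: %.1f%% overhead\n", overheadPct(0.4027, 0.4345))
}
```

For both benchmarks this works out to roughly 8%, which matches the "slight Bhyve overhead" impression, though part of it may simply be the older Go toolchain on the guest.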
For `ubench`, we limit the first run to one CPU with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find ``\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1123} + +APP 2025-04-04T19:52:01.603Z --> skipping because content is stale + +APP 2025-04-04T19:52:01.603Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:01.603Z --> sent request | {"jsonrpc":"2.0","id":220,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:01.787Z --> received didChange | language: markdown | contentVersion: 1124 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:02.019Z --> received didChange | language: markdown | contentVersion: 1125 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:02.124Z --> received didChange | language: markdown | contentVersion: 1126 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:02.274Z --> received didChange | language: markdown | contentVersion: 1127 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:02.404Z --> received didChange | language: markdown | contentVersion: 1128 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:02.492Z --> received didChange | language: markdown | contentVersion: 1129 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:03.150Z --> received didChange | language: markdown | contentVersion: 1130 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:03.155Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":46,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":221} + +APP 2025-04-04T19:52:03.357Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust this configuration to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
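Since the three VMs follow the same numbering scheme (`r0` to `r2`, IPs `192.168.1.120` to `.122`), the per-VM network commands could also be generated instead of typed by hand. A hypothetical Go sketch; the interface name `enp0s5` and the IP layout are taken from my setup and may differ in yours:

```go
package main

import "fmt"

// nmcliCommands returns the nmcli invocations that give VM r<index>
// its static address; the VMs get consecutive IPs starting at .120.
func nmcliCommands(index int) []string {
	ip := fmt.Sprintf("192.168.1.%d", 120+index)
	return []string{
		fmt.Sprintf("nmcli connection modify enp0s5 ipv4.addresses %s/24", ip),
		"nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1",
		"nmcli connection modify enp0s5 ipv4.dns 192.168.1.1",
		"nmcli connection modify enp0s5 ipv4.method manual",
	}
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Printf("# r%d:\n", i)
		for _, cmd := range nmcliCommands(i) {
			fmt.Println(cmd)
		}
	}
}
```

With only three VMs, pasting the commands manually is just as quick, which is what I did.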
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
For `ubench`, we limit the first run to one CPU with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1130} + +APP 2025-04-04T19:52:03.357Z --> calling completion event + +APP 2025-04-04T19:52:03.357Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:03.357Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:03.357Z --> copilot | completion request + +APP 2025-04-04T19:52:03.358Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:52:03.371Z --> received didChange | language: markdown | contentVersion: 1131 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:03.450Z --> received didChange | language: markdown | contentVersion: 1132 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:03.487Z --> received didChange | language: markdown | contentVersion: 1133 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:03.493Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":49,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":222} + +APP 2025-04-04T19:52:03.695Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
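As a side note, what POPCNT computes can be illustrated in a few lines of Go. This is an illustrative sketch, not part of this setup; `math/bits.OnesCount64` is typically compiled down to the POPCNT instruction on amd64 CPUs that support it:

```go
package main

import (
	"fmt"
	"math/bits"
)

// popCount mirrors what the POPCNT instruction does: it counts the
// set bits (ones) in a 64-bit word.
func popCount(x uint64) int {
	return bits.OnesCount64(x)
}

func main() {
	fmt.Println(popCount(0b10110100)) // four bits are set, prints 4
	fmt.Println(popCount(0xFF))       // the eight low bits are set, prints 8
}
```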
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust this configuration to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in \n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1133} + +APP 2025-04-04T19:52:03.695Z --> calling completion event + +APP 2025-04-04T19:52:03.695Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:03.695Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:03.695Z --> copilot | completion request + +APP 2025-04-04T19:52:03.696Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:52:03.696Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:03.696Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:04.173Z --> received didChange | language: markdown | contentVersion: 1134 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:04.351Z --> received didChange | language: markdown | contentVersion: 1135 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:04.523Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:52:04.524Z --> completion hints: | the Rocky Linux repositories, so I couldn't run the same benchmark in the VMs. But I am sure that the results would be similar to the Go benchmarks. + +APP 2025-04-04T19:52:04.524Z --> sent request | {"jsonrpc":"2.0","id":222,"result":{"isIncomplete":false,"items":[{"label":"the Rocky Linux repositories, so I couldn't run the same benchmark in the VMs. But I am sure that the results would be similar to the Go benchmarks.","kind":1,"preselect":true,"detail":"the Rocky Linux repositories, so I couldn't run the same benchmark in the VMs. But I am sure that the results would be similar to the Go benchmarks.","insertText":"the Rocky Linux repositories, so I couldn't run the same benchmark in the VMs. 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there, all good!
## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso        1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

We need to adjust this configuration so that Rocky Linux boots, and make some other changes as well: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs such as 4 CPU cores and 14GB of RAM.
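`vm-bhyve` generated the per-VM `uuid` and `network0_mac` values automatically. If you ever need to hand-assign extra MAC addresses in a home lab, the safe kind is a locally administered unicast address. A quick sketch in Go (illustrative only, not what `vm-bhyve` does internally):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// randomLabMAC returns a random MAC with the locally administered bit set
// and the multicast bit cleared, so it can never collide with a
// vendor-assigned hardware address.
func randomLabMAC() (string, error) {
	mac := make([]byte, 6)
	if _, err := rand.Read(mac); err != nil {
		return "", err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01 // locally administered, unicast
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
		mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]), nil
}

func main() {
	mac, err := randomLabMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}
```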
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900    *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` and on the VM `r0` running on it:

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s 1`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in the Rocky Linux package repositories, so I couldn't run the same benchmark in the Linux VMs.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
1140 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:06.555Z --> received didChange | language: markdown | contentVersion: 1141 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:06.752Z --> received didChange | language: markdown | contentVersion: 1142 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:06.816Z --> received didChange | language: markdown | contentVersion: 1143 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:06.822Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":53,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":224} + +APP 2025-04-04T19:52:06.995Z --> received didChange | language: markdown | contentVersion: 1144 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:07.023Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs; the examples shown are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any o\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1144} + +APP 2025-04-04T19:52:07.023Z --> skipping because content is stale + +APP 2025-04-04T19:52:07.023Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:07.023Z --> sent request | {"jsonrpc":"2.0","id":224,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:07.062Z --> received didChange | language: markdown | contentVersion: 1145 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:07.172Z --> received didChange | language: markdown | contentVersion: 1146 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:07.178Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":56,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":225} + +APP 2025-04-04T19:52:07.319Z --> received didChange | language: markdown | contentVersion: 1147 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:07.379Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs; the examples shown are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of t\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1147} + +APP 2025-04-04T19:52:07.379Z --> skipping because content is stale + +APP 2025-04-04T19:52:07.379Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:07.379Z --> sent request | {"jsonrpc":"2.0","id":225,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:07.724Z --> received didChange | language: markdown | contentVersion: 1148 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:07.802Z --> received didChange | language: markdown | contentVersion: 1149 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:07.904Z --> received didChange | language: markdown | contentVersion: 1150 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:07.910Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":60,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":226} + +APP 2025-04-04T19:52:08.048Z --> received didChange | language: markdown | contentVersion: 1151 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:08.072Z --> 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution that will run on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for storage and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary word. Bhyve support for the POPCNT instruction is important because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:
```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the `bhyve-firmware` package, which is required to boot guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```
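As an aside: the `zfs:` prefix in `vm_dir` tells `vm-bhyve` to treat the value as a ZFS dataset rather than a plain directory, which is why the dataset had to be created first. The default mountpoint is simply the dataset name with a leading slash, which is what the symlink above points at. A trivial sketch of that mapping (pure string handling, no ZFS required):

```sh
# Derive dataset and default mountpoint from a vm-bhyve style vm_dir value
vm_dir="zfs:zroot/bhyve"
dataset="${vm_dir#zfs:}"   # strip the "zfs:" prefix -> zroot/bhyve
mountpoint="/${dataset}"   # default ZFS mountpoint  -> /zroot/bhyve
echo "${dataset} ${mountpoint}"
```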
## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Rocky Linux 9 is a good fit as a VM OS primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments (a UEFI loader and graphics). And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as installations happen only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```
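A quick aside on the `truncate` step: it only changes the file's apparent size, so the image stays sparse and blocks are allocated on demand as the guest writes. That is why enlarging a disk image this way is instant and initially costs no space. A small sketch demonstrating this on a scratch file (GNU `stat` syntax shown; FreeBSD's `stat` uses `-f` instead):

```sh
# Create a sparse 100M file and compare apparent size vs. allocated blocks
img=$(mktemp)
truncate -s 100M "$img"
apparent=$(stat -c %s "$img")   # apparent size in bytes
blocks=$(stat -c %b "$img")     # 512-byte blocks actually allocated
echo "apparent=${apparent} bytes, allocated=${blocks} blocks"
rm -f "$img"
```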
### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, and there is currently only one VM per host; it may become useful later when there are more. After adding this, `vm list` shows a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```
### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file once more on all 3 VMs.
There, I set `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (the benchmarks live in a `_test.go` file so that `go test -bench` picks them up):

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
```
```sh
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs well.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!
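To put the three `BenchmarkCPUSilly1` numbers side by side, the virtualization overhead can be expressed relative to the bare-metal FreeBSD result (a quick back-of-the-envelope calculation; the inputs are the ns/op figures quoted above):

```sh
awk 'BEGIN {
    host    = 0.4022   # FreeBSD on bare metal, ns/op
    rocky   = 0.4347   # Rocky Linux VM on Bhyve, ns/op
    fbsd_vm = 0.4273   # FreeBSD VM on Bhyve, ns/op
    printf "Rocky VM overhead:   +%.1f%%\n", (rocky   / host - 1) * 100
    printf "FreeBSD VM overhead: +%.1f%%\n", (fbsd_vm / host - 1) * 100
}'
```

With the numbers above, this works out to roughly +8.1% for the Rocky VM and +6.2% for the FreeBSD VM: single-digit percentages, as one would hope for from a thin hypervisor.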
## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories.
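As a small sanity check of the `ubench` output above: the `Single AVG` score appears to be just the arithmetic mean of the CPU and MEM scores, truncated to an integer (an assumption based on the printed values, not on `ubench`'s source):

```sh
# Mean of the two scores from the FreeBSD host run above
awk 'BEGIN { cpu = 671010; mem = 1705237; printf "Ubench Single AVG: %d\n", (cpu + mem) / 2 }'
```

This reproduces the reported score of 1188123.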
## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for the future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
2025-04-04T19:52:09.867Z --> received didChange | language: markdown | contentVersion: 1164 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:10.010Z --> received didChange | language: markdown | contentVersion: 1165 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:10.261Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":63,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":228} + +APP 2025-04-04T19:52:10.370Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:10.463Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IP addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1165} + +APP 2025-04-04T19:52:10.463Z --> calling completion event + +APP 2025-04-04T19:52:10.463Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:10.463Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:10.463Z --> copilot | completion request + +APP 2025-04-04T19:52:10.464Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:52:10.576Z --> received didChange | language: markdown | contentVersion: 1166 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:10.893Z --> received didChange | language: markdown | contentVersion: 1167 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:10.972Z --> received didChange | language: markdown | contentVersion: 1168 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:10.977Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":66,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":229} + +APP 2025-04-04T19:52:11.148Z --> received didChange | language: markdown | contentVersion: 1169 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.179Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IP addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1169} + +APP 2025-04-04T19:52:11.179Z --> skipping because content is stale + +APP 2025-04-04T19:52:11.179Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:11.179Z --> sent request | {"jsonrpc":"2.0","id":229,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:11.179Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:52:11.180Z --> completion hints: | ky Linux repositories. I tried to compile it from source, but it failed with a missing `libc.so` dependency. I also tried to install the `glibc-devel` package, but it didn't help. + +APP 2025-04-04T19:52:11.180Z --> sent request | {"jsonrpc":"2.0","id":228,"result":{"isIncomplete":false,"items":[{"label":"ky Linux repositories. I tried to compile it from source, but it failed with a missing `libc.so` dependency. I also tried to install the `glibc-devel` package, but it didn't help.","kind":1,"preselect":true,"detail":"ky Linux repositories. I tried to compile it from source, but it failed with a missing `libc.so` dependency. I also tried to install the `glibc-devel` package, but it didn't help.","insertText":"ky Linux repositories. I tried to compile it from source, but it failed with a missing `libc.so` dependency. 
I also tried to install the `glibc-devel` package, but it didn't help.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":407,"character":242},"end":{"line":407,"character":242}}}]}]}} + +APP 2025-04-04T19:52:11.180Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:11.218Z --> received didChange | language: markdown | contentVersion: 1170 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.448Z --> received didChange | language: markdown | contentVersion: 1171 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.518Z --> received didChange | language: markdown | contentVersion: 1172 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.606Z --> received didChange | language: markdown | contentVersion: 1173 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.711Z --> received didChange | language: markdown | contentVersion: 1174 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.717Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":72,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":230} + +APP 2025-04-04T19:52:11.864Z --> received didChange | language: markdown | contentVersion: 
1175 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.902Z --> received didChange | language: markdown | contentVersion: 1176 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:11.918Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on host `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1176} + +APP 2025-04-04T19:52:11.918Z --> skipping because content is stale + +APP 2025-04-04T19:52:11.919Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:11.919Z --> sent request | {"jsonrpc":"2.0","id":230,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:11.991Z --> received didChange | language: markdown | contentVersion: 1177 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:12.162Z --> received didChange | language: markdown | contentVersion: 1178 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:12.262Z --> received didChange | language: markdown | contentVersion: 1179 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:12.376Z --> received didChange | language: markdown | contentVersion: 1180 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:12.510Z --> received didChange | language: markdown | contentVersion: 1181 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:12.629Z --> received didChange | language: markdown | contentVersion: 1182 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:12.743Z --> received didChange | language: markdown | contentVersion: 1183 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:12.994Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":81,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":231} + +APP 2025-04-04T19:52:13.118Z --> received didChange | language: markdown | contentVersion: 1184 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:13.195Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the three VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all three VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
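To put the sillybench numbers above into perspective, the relative VM overhead implied by the ns/op figures can be computed directly. This is a quick back-of-the-envelope sketch (not part of the original post); `overheadPct` is a helper introduced here, and the percentages are derived only from the results already shown:

```go
package main

import "fmt"

// overheadPct returns how much slower a VM result is
// compared to the bare-metal host result, in percent.
func overheadPct(vmNsOp, hostNsOp float64) float64 {
	return (vmNsOp/hostNsOp - 1) * 100
}

func main() {
	host := 0.4022      // FreeBSD host, BenchmarkCPUSilly1 ns/op
	linuxVM := 0.4347   // Rocky Linux @ Bhyve
	freebsdVM := 0.4273 // FreeBSD @ Bhyve

	fmt.Printf("Rocky Linux VM overhead: %.1f%%\n", overheadPct(linuxVM, host))
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPct(freebsdVM, host))
}
```

So both guests stay within roughly 6-8% of the bare-metal host in this (admittedly silly) micro-benchmark.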
For `ubench`, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this benchmark on the Rocky Linux VMs.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1184} + +APP 2025-04-04T19:52:13.195Z --> skipping because content is stale + +APP 2025-04-04T19:52:13.195Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:13.195Z --> sent request | {"jsonrpc":"2.0","id":231,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:13.238Z --> received didChange | language: markdown | contentVersion: 1185 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:13.358Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:13.410Z --> received didChange | language: markdown | contentVersion: 1186 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:13.520Z --> received didChange | language: markdown | contentVersion: 1187 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:13.696Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:13.696Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:15.169Z --> received didChange | language: markdown | contentVersion: 1188 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:15.174Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":86,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":232} + +APP 2025-04-04T19:52:15.344Z --> received didChange | language: markdown | contentVersion: 1189 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:15.374Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust the configuration to make Rocky Linux boot (plus some other tweaks; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the three VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all three VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
For `ubench`, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this benchmark on the Rocky Linux VMs.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1189} + +APP 2025-04-04T19:52:15.374Z --> skipping because content is stale + +APP 2025-04-04T19:52:15.374Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:15.374Z --> sent request | {"jsonrpc":"2.0","id":232,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:15.453Z --> received didChange | language: markdown | contentVersion: 1190 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:15.494Z --> received didChange | language: markdown | contentVersion: 1191 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:15.500Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":89,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":233} + +APP 2025-04-04T19:52:15.702Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nWe need to adjust this configuration to make Rocky Linux boot, plus some other tweaks: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=\"io\"\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0`, which runs on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nNext, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped the `ubench` comparison for the Rocky Linux VMs.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1191} + +APP 2025-04-04T19:52:15.702Z --> calling completion event + +APP 2025-04-04T19:52:15.702Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:15.702Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:15.702Z --> copilot | completion request + +APP 2025-04-04T19:52:15.702Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:52:15.819Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:15.820Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:15.820Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:16.103Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:16.756Z --> received didChange | language: markdown | contentVersion: 1192 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:16.882Z --> received didChange | language: markdown | contentVersion: 1193 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:16.887Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":91,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":234} + +APP 2025-04-04T19:52:16.962Z --> received didChange | language: markdown | contentVersion: 1194 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:17.056Z --> received didChange | language: markdown | contentVersion: 1195 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:17.088Z --> skipping because content is stale + +APP 2025-04-04T19:52:17.088Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:17.088Z --> sent request | {"jsonrpc":"2.0","id":234,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:17.270Z --> received didChange | language: markdown | contentVersion: 1196 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:17.509Z --> received didChange | language: markdown | contentVersion: 1197 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:17.682Z --> received didChange | language: markdown | contentVersion: 1198 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:17.933Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":96,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":235} + +APP 2025-04-04T19:52:18.133Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth 
blog post in the f3s series on self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to set up the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipp\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1198} + +APP 2025-04-04T19:52:18.134Z --> calling completion event + +APP 2025-04-04T19:52:18.134Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:18.134Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:18.134Z --> copilot | completion request + +APP 2025-04-04T19:52:18.135Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:52:18.580Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:52:18.581Z --> completion hints: | ed this part. 
+ +APP 2025-04-04T19:52:18.581Z --> sent request | {"jsonrpc":"2.0","id":235,"result":{"isIncomplete":false,"items":[{"label":"ed this part.","kind":1,"preselect":true,"detail":"ed this part.","insertText":"ed this part.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":407,"character":109},"end":{"line":407,"character":109}}}]}]}} + +APP 2025-04-04T19:52:18.581Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:18.775Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:19.414Z --> received didChange | language: markdown | contentVersion: 1199 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:19.497Z --> received didChange | language: markdown | contentVersion: 1200 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:19.748Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":98,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":236} + +APP 2025-04-04T19:52:19.948Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this part.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1200} + +APP 2025-04-04T19:52:19.949Z --> calling completion event + +APP 2025-04-04T19:52:19.949Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}}}] + +APP 2025-04-04T19:52:19.949Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":407,"character":0},"end":{"line":408,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:52:19.949Z --> copilot | completion request + +APP 2025-04-04T19:52:19.950Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:52:20.292Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:52:20.292Z --> completion hints: | this part. 
+ +APP 2025-04-04T19:52:20.292Z --> sent request | {"jsonrpc":"2.0","id":236,"result":{"isIncomplete":false,"items":[{"label":"this part.","kind":1,"preselect":true,"detail":"this part.","insertText":"this part.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":407,"character":108},"end":{"line":407,"character":108}}}]}]}} + +APP 2025-04-04T19:52:20.292Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:20.464Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:20.530Z --> received didChange | language: markdown | contentVersion: 1201 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:20.536Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":99,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":237} + +APP 2025-04-04T19:52:20.666Z --> received didChange | language: markdown | contentVersion: 1202 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:20.737Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home 
lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The following examples are all executed on `f0` (and on the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this part.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1202} + +APP 2025-04-04T19:52:20.737Z --> skipping because content is stale + +APP 2025-04-04T19:52:20.737Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:20.737Z --> sent request | {"jsonrpc":"2.0","id":237,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:20.767Z --> received didChange | language: markdown | contentVersion: 1203 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:20.805Z --> received didChange | language: markdown | contentVersion: 1204 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:20.935Z --> received didChange | language: markdown | contentVersion: 1205 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:21.014Z --> received didChange | language: markdown | contentVersion: 1206 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:21.020Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":104,"line":407},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":238} + +APP 2025-04-04T19:52:21.147Z --> 
received didChange | language: markdown | contentVersion: 1207 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:21.208Z --> received didChange | language: markdown | contentVersion: 1208 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s', as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this te\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1208} + +APP 2025-04-04T19:52:21.221Z --> skipping because content is stale + +APP 2025-04-04T19:52:21.221Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:21.221Z --> sent request | {"jsonrpc":"2.0","id":238,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:21.401Z --> received didChange | language: markdown | contentVersion: 1209 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:21.584Z --> received didChange | language: markdown | contentVersion: 1210 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:21.684Z --> received didChange | language: markdown | contentVersion: 1211 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:25.703Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:25.821Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:26.063Z --> received didChange | language: markdown | contentVersion: 
1212 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:26.229Z --> received didChange | language: markdown | contentVersion: 1213 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:26.332Z --> received didChange | language: markdown | contentVersion: 1214 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:26.827Z --> received didChange | language: markdown | contentVersion: 1215 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:27.434Z --> received didChange | language: markdown | contentVersion: 1216 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:27.440Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":1,"line":409},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":239} + +APP 2025-04-04T19:52:27.566Z --> received didChange | language: markdown | contentVersion: 1217 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:27.572Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":409},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":240} + +APP 2025-04-04T19:52:27.774Z --> running completion on buffer | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its main strength is its minimal overhead, which allows virtual machines to achieve near-native performance, and it leverages the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for Bhyve because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve passes this capability through to the virtual machines for better performance.
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB   4780 kBps   06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
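Incidentally, since vm-bhyve keeps each guest's settings in a plain config file, these changes could also be scripted instead of made interactively. A minimal sketch, assuming the default config shown above; the `patch_conf` helper is my own invention, not part of the official tooling:

```sh
#!/bin/sh
# Sketch: apply the loader/CPU/memory changes to a vm-bhyve guest
# config file directly instead of editing it via `vm configure`.
# patch_conf is a hypothetical helper; $1 is the path to the config.
patch_conf() {
  conf=$1
  # Switch the loader to UEFI and bump the resources in place.
  # sed -i.bak works on both BSD and GNU sed.
  sed -i.bak \
      -e 's/^loader=.*/loader="uefi"/' \
      -e 's/^cpu=.*/cpu=4/' \
      -e 's/^memory=.*/memory=14G/' \
      "$conf"
  # Append the settings the default config lacks.
  cat <<'EOF' >>"$conf"
guest="linux"
uefi_vars="yes"
graphics="yes"
graphics_vga=io
EOF
}

# On the host, this would be run as root against the real file, e.g.:
# patch_conf /zroot/bhyve/rocky/rocky.conf
```

The interactive `vm configure` route below achieves the same result; a script like this would only pay off with more than a handful of guests.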
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079   8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
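The resize step deserves a small safety net: `truncate -s` will happily shrink an image as well, which would corrupt the guest's filesystem. A hedged sketch of a grow-only wrapper; the `grow_image` function is my own, not part of vm-bhyve:

```sh
#!/bin/sh
# Sketch: grow a raw disk image to a target size in bytes, but never
# shrink it (shrinking a raw image would corrupt the guest filesystem).
# grow_image is a hypothetical helper; truncate(1) exists on both
# FreeBSD and Linux.
grow_image() {
  img=$1
  want=$2
  have=$(wc -c <"$img")
  if [ "$have" -lt "$want" ]; then
    truncate -s "$want" "$img"
  fi
}

# On the host this would be run as root, e.g. to grow to 100G:
# grow_image /zroot/bhyve/rocky/disk0.img 107374182400
```

With only three VMs this is overkill, but it makes the operation safe to re-run.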
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically when the servers boot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. It may become useful later when there are more. After adding these lines, `vm list` shows `Yes` in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME    DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky   default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
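The addressing scheme is simple: VM `rN` gets `192.168.1.12N`. The per-VM commands below therefore only differ in the last octet and the hostname, so they can be templated. A sketch that merely prints the commands for review before pasting them into a VM's root shell; the `gen_static_net` helper is hypothetical, and the `enp0s5` interface name is the one from my VMs:

```sh
#!/bin/sh
# Sketch: print the NetworkManager commands for VM rN (N = 0, 1 or 2).
# gen_static_net only echoes the commands; nothing is applied here.
gen_static_net() {
  n=$1
  cat <<EOF
nmcli connection modify enp0s5 ipv4.addresses 192.168.1.12${n}/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
hostnamectl set-hostname r${n}.lan.buetow.org
EOF
}

# Example: gen_static_net 0
```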
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a `PermitRootLogin yes` line to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
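As a quick aside before the ubench numbers: the gap between the two sillybench runs above can be quantified directly. A small awk sketch using the `BenchmarkCPUSilly1` ns/op figures measured above:

```sh
#!/bin/sh
# Aside: relative slowdown of the Rocky Linux guest vs. the FreeBSD
# host, using the BenchmarkCPUSilly1 ns/op figures measured above.
awk 'BEGIN {
  freebsd = 0.4022  # ns/op on the FreeBSD host
  linux   = 0.4347  # ns/op in the Rocky Linux VM on Bhyve
  printf "%.1f%% slower\n", (linux - freebsd) / freebsd * 100
}'
```

So the guest run is roughly 8% slower in this (admittedly unscientific) micro-benchmark.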
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:    671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for the future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso                1808 MB 4780 kBps    06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, this configuration needs a few adjustments. We also give the VMs beefy specs, 4 CPU cores and 14G of RAM, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot     bhyve      6079  8   tcp4   *:5900                *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all three VMs. The examples below are all executed on `f0` and the VM `r0` running on it:\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000    0.4022 ns/op\nBenchmarkCPUSilly2-4    1000000000    0.4027 ns/op\nPASS\nok    codeberg.org/snonux/sillybench    0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000    0.4347 ns/op\nBenchmarkCPUSilly2-4    1000000000    0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1    1000000000    0.4273 ns/op\nBenchmarkCPUSilly2    1000000000    0.4286 ns/op\nPASS\nok    codeberg.org/snonux/sillybench    0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU:   671010 (0.40s)\nUbench Single MEM:  1705237 (0.48s)\n-----------------------------------\nUbench Single AVG:  1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU:  2660220\nUbench MEM:  3095182\n--------------------\nUbench AVG:  2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1219} + +APP 2025-04-04T19:52:28.014Z --> skipping because content is stale + +APP 2025-04-04T19:52:28.014Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:28.014Z --> sent request | {"jsonrpc":"2.0","id":241,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:52:28.034Z --> received didChange | language: markdown | contentVersion: 1220 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:28.074Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:52:28.075Z --> completion hints: | # Conclusion + +APP 2025-04-04T19:52:28.075Z --> sent request | {"jsonrpc":"2.0","id":240,"result":{"isIncomplete":false,"items":[{"label":"# Conclusion","kind":1,"preselect":true,"detail":"# Conclusion","insertText":"# Conclusion","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":409,"character":14},"end":{"line":409,"character":14}}}]}]}} + +APP 2025-04-04T19:52:28.075Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:28.135Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:28.220Z --> received didChange | language: markdown | contentVersion: 1221 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:28.402Z --> received didChange | language: markdown | contentVersion: 1222 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:28.503Z --> received didChange | language: markdown | contentVersion: 1223 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:28.696Z --> received didChange | language: markdown | contentVersion: 1224 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:28.809Z --> received didChange | language: markdown | contentVersion: 1225 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:28.944Z --> received didChange | language: markdown | contentVersion: 1226 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:29.039Z --> received didChange | language: markdown | contentVersion: 1227 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:29.226Z --> received didChange | language: markdown | contentVersion: 1228 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:52:29.950Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:52:37.775Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:55:42.094Z --> received didChange | language: markdown | contentVersion: 1229 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:55:43.555Z --> received didChange | language: markdown | contentVersion: 1230 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:55:44.724Z --> received didChange | language: markdown | contentVersion: 1231 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:26.810Z --> received didChange | language: markdown | contentVersion: 1232 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:26.920Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":293,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":242} + +APP 2025-04-04T19:57:27.122Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for 
self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable:  -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir:  -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve          1.74M   453G  1.74M  /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso                1808 MB 4780 kBps    06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, this configuration needs a few adjustments. We also give the VMs beefy specs, 4 CPU cores and 14G of RAM, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot     bhyve      6079  8   tcp4   *:5900                *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all three VMs. The examples below are all executed on `f0` and the VM `r0` running on it:\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE\nrocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000    0.4022 ns/op\nBenchmarkCPUSilly2-4    1000000000    0.4027 ns/op\nPASS\nok    codeberg.org/snonux/sillybench    0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4    1000000000    0.4347 ns/op\nBenchmarkCPUSilly2-4    1000000000    0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1    1000000000    0.4273 ns/op\nBenchmarkCPUSilly2    1000000000    0.4286 ns/op\nPASS\nok    codeberg.org/snonux/sillybench    0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU:   671010 (0.40s)\nUbench Single MEM:  1705237 (0.48s)\n-----------------------------------\nUbench Single AVG:  1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU:  2660220\nUbench MEM:  3095182\n--------------------\nUbench AVG:  2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1232} + +APP 2025-04-04T19:57:27.122Z --> calling completion event + +APP 2025-04-04T19:57:27.122Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}}}] + +APP 2025-04-04T19:57:27.122Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":368,"character":0},"end":{"line":369,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:57:27.122Z --> copilot | completion request + +APP 2025-04-04T19:57:27.123Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:57:27.153Z --> received didChange | language: markdown | contentVersion: 1233 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:27.224Z --> received didChange | language: markdown | contentVersion: 1234 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:27.440Z --> received didChange | language: markdown | contentVersion: 1235 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:27.593Z --> received didChange | language: markdown | contentVersion: 1236 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:27.905Z --> 
response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:57:27.907Z --> completion hints: | 4 CPUs, and we want to see how it performs with one CPU only. The second run is done without the `-s` option, which means all CPUs are used. + +APP 2025-04-04T19:57:27.907Z --> sent request | {"jsonrpc":"2.0","id":242,"result":{"isIncomplete":false,"items":[{"label":"4 CPUs, and we want to see how it performs with one CPU only. The second run is done without the `-s` option, which means all CPUs are used.","kind":1,"preselect":true,"detail":"4 CPUs, and we want to see how it performs with one CPU only. The second run is done without the `-s` option, which means all CPUs are used.","insertText":"4 CPUs, and we want to see how it performs with one CPU only. The second run is done without the `-s` option, which means all CPUs are used.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":368,"character":433},"end":{"line":368,"character":511}}}]}]}} + +APP 2025-04-04T19:57:27.907Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:57:31.738Z --> received didChange | language: markdown | contentVersion: 1237 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:31.929Z --> received didChange | language: markdown | contentVersion: 1238 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:31.935Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":302,"line":368},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":243} + +APP 2025-04-04T19:57:32.106Z --> received didChange | language: markdown | contentVersion: 1239 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:32.137Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, as the VM has fewer CPUs than the host system, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1239} + +APP 2025-04-04T19:57:32.137Z --> skipping because content is stale + +APP 2025-04-04T19:57:32.137Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:57:32.137Z --> sent request | {"jsonrpc":"2.0","id":243,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:57:32.357Z --> received didChange | language: markdown | contentVersion: 1240 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:32.556Z --> received didChange | language: markdown | contentVersion: 1241 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:32.644Z --> received didChange | language: markdown | contentVersion: 1242 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:32.865Z --> received didChange | language: markdown | contentVersion: 1243 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:32.948Z --> received didChange | language: markdown | contentVersion: 1244 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:33.087Z --> received didChange | language: markdown | contentVersion: 1245 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:33.304Z --> received didChange | language: markdown | contentVersion: 1246 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:33.352Z --> received didChange | language: markdown | contentVersion: 1247 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:37.123Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:57:40.671Z --> received didChange | language: markdown | contentVersion: 1248 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:57:44.789Z --> received didChange | language: markdown | contentVersion: 1249 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:58:16.833Z --> received didChange | language: markdown | contentVersion: 1250 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:58:17.198Z --> received didChange | language: markdown | contentVersion: 1251 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:58:18.142Z --> received didChange | language: markdown | contentVersion: 1252 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:45.383Z --> received didChange | language: markdown | contentVersion: 1253 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:45.554Z --> received didChange | language: markdown | contentVersion: 1254 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:45.687Z --> received didChange | language: markdown | contentVersion: 1255 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:45.829Z --> received didChange | language: markdown | contentVersion: 1256 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:45.941Z --> received didChange | language: markdown | contentVersion: 1257 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.076Z --> received didChange | language: markdown | contentVersion: 1258 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.136Z --> received didChange | language: markdown | contentVersion: 1259 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.209Z --> received didChange | language: markdown | contentVersion: 1260 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.342Z --> received didChange | language: markdown | contentVersion: 1261 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.348Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":7,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":244} + +APP 2025-04-04T19:59:46.457Z --> received didChange | language: markdown | contentVersion: 1262 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.549Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples below are all executed for the VM `r0` running on host `f0`:\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring t\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1262} + +APP 2025-04-04T19:59:46.550Z --> skipping because content is stale + +APP 2025-04-04T19:59:46.550Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:59:46.550Z --> sent request | {"jsonrpc":"2.0","id":244,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:59:46.552Z --> received didChange | language: markdown | contentVersion: 1263 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.646Z --> received didChange | language: markdown | contentVersion: 1264 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.759Z --> received didChange | language: markdown | contentVersion: 1265 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:46.765Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":11,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":245} + +APP 2025-04-04T19:59:46.966Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs, namely 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples below are all executed for the VM `r0` running on host `f0`:\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the \n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1265} + +APP 2025-04-04T19:59:46.966Z --> calling completion event + +APP 2025-04-04T19:59:46.966Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T19:59:46.967Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:59:46.967Z --> copilot | completion request + +APP 2025-04-04T19:59:46.967Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:59:47.579Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:59:47.580Z --> completion hints: | benchmark, I noticed that the CPU usage was at 100% for all 4 CPUs. The benchmark took about 1.5 seconds to complete. 
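The benchmark numbers captured in the buffer above invite a quick back-of-the-envelope comparison. As a sketch only (this helper is hypothetical and not part of the sillybench repository), the relative slowdown of the Rocky Linux guest versus the FreeBSD host can be computed in Go, the language the post already uses for its benchmark:

```go
package main

import "fmt"

// slowdownPercent compares two `go test -bench` results (in ns/op) and
// returns the relative slowdown of the second over the first, in percent.
// Hypothetical helper for illustration only.
func slowdownPercent(hostNsPerOp, guestNsPerOp float64) float64 {
	return (guestNsPerOp - hostNsPerOp) / hostNsPerOp * 100
}

func main() {
	// ns/op figures for BenchmarkCPUSilly1 as reported in the post:
	// FreeBSD host: 0.4022, Rocky Linux VM on Bhyve: 0.4347.
	fmt.Printf("Rocky Linux @ Bhyve slowdown vs. FreeBSD host: %.1f%%\n",
		slowdownPercent(0.4022, 0.4347))
	// Prints: Rocky Linux @ Bhyve slowdown vs. FreeBSD host: 8.1%
}
```

An overhead in the single-digit percent range is consistent with the post's conclusion that Bhyve performs excellently for CPU-bound workloads.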
+ +APP 2025-04-04T19:59:47.581Z --> sent request | {"jsonrpc":"2.0","id":245,"result":{"isIncomplete":false,"items":[{"label":"benchmark, I noticed that the CPU usage was at 100% for all 4 CPUs. The benchmark took about 1.5 seconds to complete.","kind":1,"preselect":true,"detail":"benchmark, I noticed that the CPU usage was at 100% for all 4 CPUs. The benchmark took about 1.5 seconds to complete.","insertText":"benchmark, I noticed that the CPU usage was at 100% for all 4 CPUs. The benchmark took about 1.5 seconds to complete.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":128},"end":{"line":425,"character":128}}}]}]}} + +APP 2025-04-04T19:59:47.581Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:59:48.719Z --> received didChange | language: markdown | contentVersion: 1266 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:48.771Z --> received didChange | language: markdown | contentVersion: 1267 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:48.923Z --> received didChange | language: markdown | contentVersion: 1268 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:48.993Z --> received didChange | language: markdown | contentVersion: 1269 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:56.746Z --> received didChange | language: markdown | contentVersion: 1270 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:56.968Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:59:57.000Z --> received didChange | language: markdown | contentVersion: 1271 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.059Z --> received didChange | language: markdown | contentVersion: 1272 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.229Z --> received didChange | language: markdown | contentVersion: 1273 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.268Z --> received didChange | language: markdown | contentVersion: 1274 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.604Z --> received didChange | language: markdown | contentVersion: 1275 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.682Z --> received didChange | language: markdown | contentVersion: 1276 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.688Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":22,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":246} + +APP 2025-04-04T19:59:57.877Z --> received didChange | language: markdown | contentVersion: 1277 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.889Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, we have to adjust this configuration (plus some other tweaks: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs such as 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples are all executed on `f0` (and the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`).\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1277} + +APP 2025-04-04T19:59:57.889Z --> skipping because content is stale + +APP 2025-04-04T19:59:57.890Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:59:57.890Z --> sent request | {"jsonrpc":"2.0","id":246,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T19:59:57.977Z --> received didChange | language: markdown | contentVersion: 1278 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:57.983Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":24,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":247} + +APP 2025-04-04T19:59:58.184Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I \n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1278} + +APP 2025-04-04T19:59:58.184Z --> calling completion event + +APP 2025-04-04T19:59:58.184Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T19:59:58.184Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:59:58.184Z --> copilot | completion request + +APP 2025-04-04T19:59:58.185Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:59:58.536Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T19:59:58.537Z --> completion hints: | had to stop the VMs, as they were consuming CPU resources. 
So I stopped all VMs and ran the benchmark again: + +APP 2025-04-04T19:59:58.538Z --> sent request | {"jsonrpc":"2.0","id":247,"result":{"isIncomplete":false,"items":[{"label":"had to stop the VMs, as they were consuming CPU resources. So I stopped all VMs and ran the benchmark again:","kind":1,"preselect":true,"detail":"had to stop the VMs, as they were consuming CPU resources. So I stopped all VMs and ran the benchmark again:","insertText":"had to stop the VMs, as they were consuming CPU resources. So I stopped all VMs and ran the benchmark again:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":132},"end":{"line":425,"character":132}}}]}]}} + +APP 2025-04-04T19:59:58.538Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T19:59:58.625Z --> received didChange | language: markdown | contentVersion: 1279 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:58.806Z --> received didChange | language: markdown | contentVersion: 1280 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:58.957Z --> received didChange | language: markdown | contentVersion: 1281 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:59.082Z --> received didChange | language: markdown | contentVersion: 1282 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:59.217Z --> received didChange | language: markdown | contentVersion: 1283 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:59.257Z --> received didChange | language: markdown | contentVersion: 1284 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:59.476Z --> received didChange | language: markdown | contentVersion: 1285 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T19:59:59.727Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":31,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":248} + +APP 2025-04-04T19:59:59.928Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1285} + +APP 2025-04-04T19:59:59.928Z --> calling completion event + +APP 2025-04-04T19:59:59.928Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T19:59:59.928Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T19:59:59.928Z --> copilot | completion request + +APP 2025-04-04T19:59:59.929Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T19:59:59.996Z --> received didChange | language: markdown | contentVersion: 1286 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:00.002Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":32,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":249} + +APP 2025-04-04T20:00:00.204Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
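As an aside on the sillybench figures above: sub-nanosecond per-op times usually mean the Go compiler has reduced the loop body to almost nothing, so the numbers mostly compare loop overhead across systems rather than arithmetic throughput. The `_ = sillyResult` line in the benchmark hints at this concern; a common, slightly stronger pattern (a sketch, not taken from the sillybench repo) is to assign the result to a package-level sink so dead-code elimination cannot remove the work:

```go
package main

import "fmt"

// Sink is package-level; assigning the benchmark result to it prevents
// the compiler from discarding the loop body as dead code.
var Sink float64

// sillyWork mirrors the arithmetic of BenchmarkCPUSilly2 for n iterations.
func sillyWork(n int) float64 {
	var r float64
	for i := 0; i < n; i++ {
		r += float64(i)
		r *= float64(i)
		r /= float64(i) + 1
	}
	return r
}

func main() {
	Sink = sillyWork(1000)
	fmt.Println(Sink > 0) // the value itself is silly; only escaping it to Sink matters
}
```

In a benchmark function, the same idea is `Sink = sillyResult` after the loop instead of `_ = sillyResult`; the blank assignment alone does not force the compiler to keep the computation.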
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed \n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1286} + +APP 2025-04-04T20:00:00.204Z --> calling completion event + +APP 2025-04-04T20:00:00.204Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:00.205Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:00.205Z --> copilot | completion request + +APP 2025-04-04T20:00:00.205Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:00:00.205Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:00.205Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:00.789Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:00:00.790Z --> completion hints: | that the CPU usage was at 100% for the VM, which is expected. The benchmark results were: + +APP 2025-04-04T20:00:00.790Z --> sent request | {"jsonrpc":"2.0","id":249,"result":{"isIncomplete":false,"items":[{"label":"that the CPU usage was at 100% for the VM, which is expected. The benchmark results were:","kind":1,"preselect":true,"detail":"that the CPU usage was at 100% for the VM, which is expected. The benchmark results were:","insertText":"that the CPU usage was at 100% for the VM, which is expected. 
The benchmark results were:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":121},"end":{"line":425,"character":121}}}]}]}} + +APP 2025-04-04T20:00:00.790Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:00.827Z --> received didChange | language: markdown | contentVersion: 1287 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:00.912Z --> received didChange | language: markdown | contentVersion: 1288 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:00.988Z --> received didChange | language: markdown | contentVersion: 1289 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:01.118Z --> received didChange | language: markdown | contentVersion: 1290 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:01.124Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":36,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":250} + +APP 2025-04-04T20:00:01.326Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the \n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1290} + +APP 2025-04-04T20:00:01.326Z --> calling completion event + +APP 2025-04-04T20:00:01.326Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:01.326Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:01.327Z --> copilot | completion request + +APP 2025-04-04T20:00:01.327Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:00:01.348Z --> received didChange | language: markdown | contentVersion: 1291 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:01.682Z --> received didChange | language: markdown | contentVersion: 1292 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:01.712Z --> 
response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:00:01.715Z --> completion hints: | CPU usage of the VM was around 20% (4 CPUs), so it seems to be using all available CPUs. The benchmark results were: + +APP 2025-04-04T20:00:01.715Z --> sent request | {"jsonrpc":"2.0","id":250,"result":{"isIncomplete":false,"items":[{"label":"CPU usage of the VM was around 20% (4 CPUs), so it seems to be using all available CPUs. The benchmark results were:","kind":1,"preselect":true,"detail":"CPU usage of the VM was around 20% (4 CPUs), so it seems to be using all available CPUs. The benchmark results were:","insertText":"CPU usage of the VM was around 20% (4 CPUs), so it seems to be using all available CPUs. The benchmark results were:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":152},"end":{"line":425,"character":152}}}]}]}} + +APP 2025-04-04T20:00:01.715Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:01.941Z --> received didChange | language: markdown | contentVersion: 1293 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:02.243Z --> received didChange | language: markdown | contentVersion: 1294 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:02.392Z --> received didChange | language: markdown | contentVersion: 1295 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:02.397Z --> received didChange | language: markdown | contentVersion: 1296 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:02.648Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":42,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":251} + +APP 2025-04-04T20:00:02.849Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve`\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1296} + +APP 2025-04-04T20:00:02.849Z --> calling completion event + +APP 2025-04-04T20:00:02.849Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:02.849Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:02.849Z --> copilot | completion request + +APP 2025-04-04T20:00:02.850Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:00:03.348Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:00:03.349Z --> completion hints: | ` process using 100% CPU, which is expected as it is a CPU-intensive task. The benchmark results are similar to the FreeBSD host results, which is a good sign. 
+ +APP 2025-04-04T20:00:03.350Z --> sent request | {"jsonrpc":"2.0","id":251,"result":{"isIncomplete":false,"items":[{"label":"` process using 100% CPU, which is expected as it is a CPU-intensive task. The benchmark results are similar to the FreeBSD host results, which is a good sign.","kind":1,"preselect":true,"detail":"` process using 100% CPU, which is expected as it is a CPU-intensive task. The benchmark results are similar to the FreeBSD host results, which is a good sign.","insertText":"` process using 100% CPU, which is expected as it is a CPU-intensive task. The benchmark results are similar to the FreeBSD host results, which is a good sign.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":201},"end":{"line":425,"character":202}}}]}]}} + +APP 2025-04-04T20:00:03.350Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:04.034Z --> received didChange | language: markdown | contentVersion: 1297 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:04.038Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":44,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":252} + +APP 2025-04-04T20:00:04.149Z --> received didChange | language: markdown | contentVersion: 1298 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:04.240Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` p\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1298} + +APP 2025-04-04T20:00:04.240Z --> skipping because content is stale + +APP 2025-04-04T20:00:04.240Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:04.240Z --> sent request | {"jsonrpc":"2.0","id":252,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:00:04.249Z --> received didChange | language: markdown | contentVersion: 1299 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:04.406Z --> received didChange | language: markdown | contentVersion: 1300 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:04.500Z --> received didChange | language: markdown | contentVersion: 1301 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:04.548Z --> received didChange | language: markdown | contentVersion: 1302 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s?
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its minimal overhead allows near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that provide virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for POPCNT is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.
## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust this if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We are going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.
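`vm-bhyve` generates the per-VM MAC addresses itself; if you ever need to assign one by hand, any locally-administered unicast address works. A small Go sketch of that idea (illustrative only; vm-bhyve's own generation scheme differs):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// randomMAC returns a random locally-administered, unicast MAC address:
// bit 1 of the first octet set (locally administered), bit 0 clear (unicast).
func randomMAC() (string, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
		buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]), nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}
```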
To make Rocky Linux boot (and because we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB of RAM), we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```
### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host; it may become useful later when there are more. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.
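The only things that change between `r0`, `r1`, and `r2` are the address and the hostname, so the command lists can be generated rather than typed. A throwaway Go sketch (the interface name `enp0s5` and the `192.168.1.12x` scheme are taken from above):

```go
package main

import "fmt"

// nmcliCommands renders the static-IP commands for VM index i (0, 1, or 2),
// following the 192.168.1.12<i> addressing scheme used above.
func nmcliCommands(i int) []string {
	ip := fmt.Sprintf("192.168.1.%d", 120+i)
	host := fmt.Sprintf("r%d.lan.buetow.org", i)
	return []string{
		fmt.Sprintf("nmcli connection modify enp0s5 ipv4.addresses %s/24", ip),
		"nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1",
		"nmcli connection modify enp0s5 ipv4.dns 192.168.1.1",
		"nmcli connection modify enp0s5 ipv4.method manual",
		"nmcli connection down enp0s5",
		"nmcli connection up enp0s5",
		fmt.Sprintf("hostnamectl set-hostname %s", host),
	}
}

func main() {
	for _, cmd := range nmcliCommands(1) {
		fmt.Println(cmd)
	}
}
```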
### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```
The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!
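To put numbers on "slightly slower": relative to the FreeBSD host's 0.4022 ns/op, the guests' `BenchmarkCPUSilly1` figures from above work out to single-digit overheads. A quick sanity-check calculation (not part of the benchmark tool itself):

```go
package main

import "fmt"

// overheadPercent returns how much slower guest is than host, in percent.
func overheadPercent(host, guest float64) float64 {
	return (guest - host) / host * 100
}

func main() {
	host := 0.4022      // FreeBSD host, BenchmarkCPUSilly1 ns/op
	linuxVM := 0.4347   // Rocky Linux @ Bhyve
	freebsdVM := 0.4273 // FreeBSD @ Bhyve
	fmt.Printf("Linux VM:   %.1f%%\n", overheadPercent(host, linuxVM))   // ~8.1%
	fmt.Printf("FreeBSD VM: %.1f%%\n", overheadPercent(host, freebsdVM)) // ~6.2%
}
```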
## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`, and it benchmarks CPU and memory performance. Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

During the benchmark, I noticed the `bhyve` process on

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.
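Comparing the single-CPU numbers above is telling: CPU throughput in the FreeBSD VM is virtually identical to the host, while the MEM score is roughly halved. A quick calculation using the figures quoted above:

```go
package main

import "fmt"

// ratioPercent returns the guest score as a percentage of the host score.
func ratioPercent(host, guest float64) float64 {
	return guest / host * 100
}

func main() {
	// ubench single-CPU scores from the runs above.
	fmt.Printf("CPU: %.1f%% of host\n", ratioPercent(671010, 672792))  // ~100.3%
	fmt.Printf("MEM: %.1f%% of host\n", ratioPercent(1705237, 852757)) // ~50.0%
}
```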
## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility keeps options open and allows managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on th\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1310} + +APP 2025-04-04T20:00:05.468Z --> skipping because content is stale + +APP 2025-04-04T20:00:05.468Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:05.468Z --> sent request | {"jsonrpc":"2.0","id":254,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:00:05.529Z --> received didChange | language: markdown | contentVersion: 1311 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:05.640Z --> received didChange | language: markdown | contentVersion: 1312 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:05.646Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":59,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":255} + +APP 2025-04-04T20:00:05.765Z --> 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor for FreeBSD. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for storage and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
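For completeness, the stop/truncate/reinstall dance above could be scripted across all three hosts. This is a minimal, hypothetical sketch of my own (not part of the original setup), assuming passwordless SSH and `doas` access to `f0`, `f1`, and `f2`; the `grow_and_reinstall` helper and the `DRY_RUN` switch are invented names:

```sh
#!/bin/sh
# Hedged sketch: repeat the stop/truncate/reinstall steps on every host.
# Assumes passwordless SSH and doas on f0, f1 and f2. With DRY_RUN=1
# (the default) the commands are only printed, not executed over SSH.
: "${DRY_RUN:=1}"

grow_and_reinstall() {
  iso=Rocky-9.5-x86_64-minimal.iso
  for host in f0 f1 f2; do
    for cmd in \
        'doas vm stop rocky' \
        'doas truncate -s 100G /zroot/bhyve/rocky/disk0.img' \
        "doas vm install rocky $iso"; do
      if [ "$DRY_RUN" -eq 1 ]; then
        echo "ssh $host $cmd"   # dry run: show what would be executed
      else
        ssh "$host" "$cmd"
      fi
    done
  done
}

grow_and_reinstall
```

Setting `DRY_RUN=0` would actually execute the commands over SSH; by default the script only prints what it would do.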
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly went with the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding the entries, the `AUTO` column now shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

During the benchmark, I noticed the `bhyve` process on the h

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":63,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":256} + +APP 2025-04-04T20:00:06.714Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust the configuration to make Rocky Linux boot, plus make some other adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1316} + +APP 2025-04-04T20:00:06.714Z --> calling completion event + +APP 2025-04-04T20:00:06.715Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:06.715Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:06.715Z --> copilot | completion request + +APP 2025-04-04T20:00:06.715Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:00:06.954Z --> received didChange | language: markdown | contentVersion: 1317 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:06.960Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":64,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":257} + +APP 2025-04-04T20:00:07.139Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:00:07.140Z --> completion hints: | system was using 100% CPU, which is expected as the benchmark is running in a VM. The results were: + +APP 2025-04-04T20:00:07.140Z --> sent request | {"jsonrpc":"2.0","id":256,"result":{"isIncomplete":false,"items":[{"label":"system was using 100% CPU, which is expected as the benchmark is running in a VM. The results were:","kind":1,"preselect":true,"detail":"system was using 100% CPU, which is expected as the benchmark is running in a VM. The results were:","insertText":"system was using 100% CPU, which is expected as the benchmark is running in a VM. The results were:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":162},"end":{"line":425,"character":162}}}]}]}} + +APP 2025-04-04T20:00:07.140Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:07.161Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust the configuration to make Rocky Linux boot, plus make some other adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host \n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1317} + +APP 2025-04-04T20:00:07.162Z --> calling completion event + +APP 2025-04-04T20:00:07.162Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:07.162Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:07.162Z --> copilot | completion request + +APP 2025-04-04T20:00:07.162Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:00:07.179Z --> received didChange | language: markdown | contentVersion: 1318 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:07.231Z --> received didChange | language: markdown | contentVersion: 1319 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:07.428Z --> 
received didChange | language: markdown | contentVersion: 1320 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:07.649Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:00:07.650Z --> completion hints: | system was using 100% CPU. This is expected as the benchmark is running in a VM with 4 vCPUs, and the host system is not doing anything else. + +APP 2025-04-04T20:00:07.650Z --> sent request | {"jsonrpc":"2.0","id":257,"result":{"isIncomplete":false,"items":[{"label":"system was using 100% CPU. This is expected as the benchmark is running in a VM with 4 vCPUs, and the host system is not doing anything else.","kind":1,"preselect":true,"detail":"system was using 100% CPU. This is expected as the benchmark is running in a VM with 4 vCPUs, and the host system is not doing anything else.","insertText":"system was using 100% CPU. This is expected as the benchmark is running in a VM with 4 vCPUs, and the host system is not doing anything else.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":205},"end":{"line":425,"character":205}}}]}]}} + +APP 2025-04-04T20:00:07.650Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:07.736Z --> received didChange | language: markdown | contentVersion: 1321 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:07.741Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":68,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":258} + +APP 2025-04-04T20:00:07.851Z --> received didChange | language: markdown | contentVersion: 1322 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:07.915Z --> received didChange | language: markdown | contentVersion: 1323 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:07.944Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust this configuration to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we also give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples shown are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
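A caveat about the silly Go benchmark above before moving on: a statement like `_ = i * i` can, in principle, be optimized away by the compiler, in which case the ns/op numbers would measure an empty loop. A common defensive idiom is to accumulate into a package-level sink variable. This is a general Go pattern sketch, not code from the sillybench repository:

```go
package main

import (
	"fmt"
	"testing"
)

// sink is package-level, so stores to it cannot be proven dead and the
// compiler has to keep the benchmarked arithmetic inside the loop.
var sink uint64

func BenchmarkSquareSink(b *testing.B) {
	var s uint64
	for i := 0; i < b.N; i++ {
		s += uint64(i) * uint64(i)
	}
	sink = s
}

func main() {
	// testing.Benchmark runs a benchmark function without `go test`.
	res := testing.Benchmark(BenchmarkSquareSink)
	fmt.Println("BenchmarkSquareSink:", res.String())
}
```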
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was consuming 100% CPU. This is expected, as the benchmark was running in a VM with 4 vCPUs and the host system wasn't doing anything else.\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1323}
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly a\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1333} + +APP 2025-04-04T20:00:09.430Z --> skipping because content is stale + +APP 2025-04-04T20:00:09.430Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:09.430Z --> sent request | {"jsonrpc":"2.0","id":259,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:00:09.497Z --> received didChange | language: markdown | contentVersion: 1334 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:09.606Z --> received didChange | language: markdown | contentVersion: 1335 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:09.611Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":82,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":260} + +APP 2025-04-04T20:00:09.797Z --> 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution that will run on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, while leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to the virtual machines for better performance.
Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

`vm-bhyve` stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Rocky Linux 9 is a good choice as the VM operating system primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This default configuration won't boot Rocky Linux, though, and it needs some other adjustments as well. For example, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
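Since `vm-bhyve` exposed each VM's framebuffer on the default VNC port 5900, the console URLs follow directly from the host names. A trivial sketch to list them (with one VM per host; `vm-bhyve` hands out further ports sequentially when a host runs more VMs):

```shell
# Print the VNC console URL for each FreeBSD host (one VM per host,
# so each listens on the default vm-bhyve VNC port 5900).
for h in f0 f1 f2; do
  echo "vnc://$h:5900"
done
```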
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VM automatically when a server boots, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host; maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
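The VMs follow a simple numbering scheme (`r0`..`r2` on `192.168.1.120`..`122`, as used below), so the `/etc/hosts` entries can be generated instead of typed. A small sketch:

```shell
# Emit /etc/hosts lines for the three Rocky VMs r0..r2.
# The IPs 192.168.1.120..122 follow this lab's addressing scheme.
for i in 0 1 2; do
  printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```

Piping that through `doas tee -a /etc/hosts` on each host would have the same effect as typing the entries into a heredoc.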
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

We add entries for the Rocky VMs to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
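The single-CPU scores from the runs below are easiest to read as ratios: the VM's CPU score ends up essentially at native speed, while its memory score roughly halves. A quick sketch of that arithmetic (scores copied from the `ubench` runs in this section):

```shell
# Relate the Bhyve VM's single-CPU ubench scores to the host's.
# Scores are taken from the runs shown in this section.
ratio() { awk "BEGIN { printf \"%s: %.0f%%\n\", \"$1\", 100 * $2 / $3 }"; }
ratio "CPU" 672792 671010   # VM vs. host CPU score
ratio "MEM" 852757 1705237  # VM vs. host MEM score
```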
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs (with all other Bhyve VMs stopped):

```sh
```

During the benchmark, I noticed that the `bhyve` process on the host was constantly busy.

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
didChange | language: markdown | contentVersion: 1337 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:11.327Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:11.372Z --> received didChange | language: markdown | contentVersion: 1338 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:11.538Z --> received didChange | language: markdown | contentVersion: 1339 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:11.789Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":86,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":261} + +APP 2025-04-04T20:00:11.990Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
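If you do want to save some typing, a small loop can generate (or directly open) the VNC URL for each host. This is just a sketch; `vncviewer` stands in for whatever VNC client you actually use:

```sh
# Sketch: print the VNC URL for each of the three hosts f0, f1, and f2.
# Replace `echo` with your VNC client invocation (e.g. `vncviewer "f$i:5900"`)
# to connect directly instead of just printing.
for i in 0 1 2; do
    echo "vnc://f$i:5900"
done
```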
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After this change, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static IPs.
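The addressing scheme used below is mechanical: VM `rN` gets `192.168.1.12N`, mirroring the hosts `fN` at `192.168.1.13N`. So the `/etc/hosts` entries for the VMs can be generated rather than typed out. A sketch (it only prints to stdout; pipe the output into `doas tee -a /etc/hosts` once it looks right):

```sh
# Sketch: derive each VM's hosts entry from its index, assuming the
# 192.168.1.12N addressing scheme used in this series.
for i in 0 1 2; do
    printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```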
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and set `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (around 8% in this run). The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

During the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage, which corresponds to all four vCPUs being busy.

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":93,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":263} + +APP 2025-04-04T20:00:15.306Z --> received didChange | language: markdown | contentVersion: 1346 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:15.342Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
We also install the required package to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, a few adjustments are needed. As we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
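For the record, the repeated stop/enlarge/reinstall steps could be scripted across the three hosts. Here is a dry-run sketch that only prints the commands (it assumes SSH access to `f0`, `f1`, and `f2` and the paths used above; drop the `echo` to actually run them):

```sh
#!/bin/sh
# Dry run: print the disk-enlarge and reinstall commands for each host.
# Remove the 'echo' to execute them for real via SSH.
for host in f0 f1 f2; do
    echo ssh "$host" "doas vm stop rocky"
    echo ssh "$host" "doas truncate -s 100G /bhyve/rocky/disk0.img"
    echo ssh "$host" "doas vm install rocky Rocky-9.5-x86_64-minimal.iso"
done
```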
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
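The VMs get the addresses `192.168.1.120` to `192.168.1.122`. Since the `r0`-`r2` entries follow a simple pattern, the `/etc/hosts` lines can also be generated instead of typed by hand (a sketch; adjust the subnet if yours differs):

```sh
# Generate the /etc/hosts entries for the three Rocky VMs (r0-r2).
for i in 0 1 2; do
    printf '192.168.1.%s r%s r%s.lan r%s.lan.buetow.org\n' "$((120 + i))" "$i" "$i" "$i"
done
```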
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Whereas:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
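As a back-of-the-envelope check on the sillybench numbers above: the Linux guest's 0.4347 ns/op versus the FreeBSD host's 0.4027 ns/op works out to roughly an 8% difference, which a one-liner can confirm:

```sh
# Relative difference between the Linux guest and FreeBSD host sillybench results.
awk 'BEGIN { printf "%.1f%%\n", (0.4347 - 0.4027) / 0.4027 * 100 }'
# Prints: 7.9%
```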
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

During the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage.

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1350} + +APP 2025-04-04T20:00:16.233Z --> calling completion event + +APP 2025-04-04T20:00:16.233Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:16.233Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:16.233Z --> copilot | completion request + +APP 2025-04-04T20:00:16.234Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:00:16.247Z --> received didChange | language: markdown | contentVersion: 1351 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:16.254Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines.
It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead.
We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be automated or done unattended, but with only three VMs to install, the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Whereas:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
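Before the `ubench` numbers, a quick plausibility check on the silly benchmark results above: the relative slowdowns can be computed directly from the reported ns/op values. The small Go program below is only an illustration and hard-codes the `BenchmarkCPUSilly1` measurements from the runs above; the helper function is not part of the sillybench repository:

```go
package main

import "fmt"

// overheadPercent returns how much slower (in percent) measured is
// compared to baseline. Both values are ns/op figures.
func overheadPercent(baseline, measured float64) float64 {
	return (measured - baseline) / baseline * 100
}

func main() {
	host := 0.4022      // BenchmarkCPUSilly1 on the FreeBSD host
	rockyVM := 0.4347   // ... inside the Rocky Linux Bhyve VM
	freebsdVM := 0.4273 // ... inside the FreeBSD Bhyve VM

	fmt.Printf("Rocky VM vs. host:   %.1f%%\n", overheadPercent(host, rockyVM))
	fmt.Printf("FreeBSD VM vs. host: %.1f%%\n", overheadPercent(host, freebsdVM))
}
```

So even in the worst case, the virtualization penalty in this toy benchmark stays in the single digits, around 8% for the Rocky VM and 6% for the FreeBSD VM.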
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

During the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage.

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs run inside FreeBSD's Bhyve is a solid move for the future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:16.716Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:16.716Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:16.716Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:17.162Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:17.289Z --> received didChange | language: markdown | contentVersion: 1353 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:17.379Z --> received didChange | language: markdown | contentVersion: 1354 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:17.493Z --> received didChange | language: markdown | contentVersion: 1355 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:17.565Z --> received didChange | language: markdown | contentVersion: 1356 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:17.624Z --> received didChange | language: markdown | contentVersion: 1357 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:17.745Z --> received didChange | language: markdown | contentVersion: 1358 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:17.750Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":106,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":267} + +APP 2025-04-04T20:00:17.873Z --> received didChange | language: markdown | contentVersion: 1359 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:17.951Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
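A side note on the `truncate` step above: it completes instantly because the disk image is a sparse file, so the extra space is only allocated once the guest actually writes to it. A quick illustration with a throwaway 1M file instead of the real 100G image:

```sh
#!/bin/sh
# Growing a file with truncate changes its apparent size only; disk blocks
# are allocated lazily as the guest writes, hence the instant resize.
img=$(mktemp)
truncate -s 1M "$img"
wc -c < "$img"   # apparent size: 1048576 bytes
du -k "$img"     # allocated size: (close to) 0
rm -f "$img"
```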
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
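The addressing scheme is deliberately simple: VM `rN` gets `192.168.1.12N`, mirroring the hosts at `192.168.1.13N`. As a small sketch, the corresponding host-file entries can be generated from that scheme instead of typed by hand:

```sh
#!/bin/sh
# Emit the /etc/hosts entries for the Rocky VMs r0..r2 (rN -> 192.168.1.12N).
# Piping the output through 'doas tee -a /etc/hosts' would append them.
for n in 0 1 2; do
  printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$n" "$n" "$n" "$n"
done
```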
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each VM and entering the following commands:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, so I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just a bit more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
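Before the `ubench` numbers, a quick sanity check on the silly-benchmark figures above: the Rocky guest's ns/op is only a few percent above the FreeBSD host's. The arithmetic, with the `BenchmarkCPUSilly1` values copied from the runs above:

```sh
#!/bin/sh
# Relative slowdown of the Rocky guest (0.4347 ns/op) vs. the FreeBSD
# host (0.4022 ns/op), in percent. Prints 8.1%.
awk 'BEGIN { printf "%.1f%%\n", (0.4347 - 0.4022) / 0.4022 * 100 }'
```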
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
```

During the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPU cores).

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
","triggerKind":2},"position":{"character":110,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":268} + +APP 2025-04-04T20:00:18.376Z --> received didChange | language: markdown | contentVersion: 1363 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:18.390Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4)\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1363} + +APP 2025-04-04T20:00:18.391Z --> skipping because content is stale + +APP 2025-04-04T20:00:18.391Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:18.391Z --> sent request | {"jsonrpc":"2.0","id":268,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:00:18.546Z --> received didChange | language: markdown | contentVersion: 1364 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:18.551Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":112,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":269} + +APP 2025-04-04T20:00:18.724Z --> received didChange | language: markdown | contentVersion: 1365 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:18.753Z --> 
running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration needs a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with the other Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs).\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1365} + +APP 2025-04-04T20:00:18.753Z --> skipping because content is stale + +APP 2025-04-04T20:00:18.753Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:18.753Z --> sent request | {"jsonrpc":"2.0","id":269,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:00:18.838Z --> received didChange | language: markdown | contentVersion: 1366 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:19.045Z --> received didChange | language: markdown | contentVersion: 1367 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:19.236Z --> received didChange | language: markdown | contentVersion: 1368 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:19.487Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":116,"line":425},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":270} + +APP 2025-04-04T20:00:19.689Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration needs a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with the other Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs).\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1368} + +APP 2025-04-04T20:00:19.689Z --> calling completion event + +APP 2025-04-04T20:00:19.689Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}}}] + +APP 2025-04-04T20:00:19.689Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":425,"character":0},"end":{"line":426,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:19.689Z --> copilot | completion request + +APP 2025-04-04T20:00:19.690Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:00:20.552Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:00:20.553Z --> completion hints: | of the host). The benchmark was running for about 30 seconds, and the CPU usage was stable at `399%` during that time. This is a good sign that Bhyve is efficiently using the available CPU resources. 
+ +APP 2025-04-04T20:00:20.553Z --> sent request | {"jsonrpc":"2.0","id":270,"result":{"isIncomplete":false,"items":[{"label":"of the host). The benchmark was running for about 30 seconds, and the CPU usage was stable at `399%` during that time. This is a good sign that Bhyve is efficiently using the available CPU resources.","kind":1,"preselect":true,"detail":"of the host). The benchmark was running for about 30 seconds, and the CPU usage was stable at `399%` during that time. This is a good sign that Bhyve is efficiently using the available CPU resources.","insertText":"of the host). The benchmark was running for about 30 seconds, and the CPU usage was stable at `399%` during that time. This is a good sign that Bhyve is efficiently using the available CPU resources.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":425,"character":315},"end":{"line":425,"character":316}}}]}]}} + +APP 2025-04-04T20:00:20.553Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:20.658Z --> received didChange | language: markdown | contentVersion: 1369 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:20.820Z --> received didChange | language: markdown | contentVersion: 1370 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:21.088Z --> received didChange | language: markdown | contentVersion: 1371 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:21.584Z --> received didChange | language: markdown | contentVersion: 1372 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s?
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for POPCNT matters because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without it, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.
## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs exist yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, some adjustments are needed. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as this happens roughly once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
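For the record, if you did want to automate this, the Rocky (RHEL-family) installer supports unattended installs via kickstart. A minimal, untested sketch of what such a `ks.cfg` could look like; all values here are illustrative assumptions, not the configuration used in this setup:

```
# Hypothetical minimal kickstart for an unattended Rocky 9 install
text
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot

%packages
@^minimal-environment
%end
```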
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and inside the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs (with all Bhyve VMs stopped):

```sh
```

During the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 host CPUs).

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle.
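How big is the virtualization overhead, actually? A quick back-of-the-envelope sketch using the ns/op figures from the silly benchmark above (BenchmarkCPUSilly1 on the host, the Rocky VM, and the FreeBSD VM):

```go
package main

import "fmt"

// overheadPercent returns how much slower vm is than host, in percent.
func overheadPercent(host, vm float64) float64 {
	return (vm/host - 1) * 100
}

func main() {
	host := 0.4022   // ns/op on the FreeBSD host
	rocky := 0.4347  // ns/op in the Rocky Linux Bhyve VM
	fbsdVM := 0.4273 // ns/op in the FreeBSD Bhyve VM

	fmt.Printf("Rocky VM overhead:   %.1f%%\n", overheadPercent(host, rocky))
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPercent(host, fbsdVM))
}
```

That works out to roughly 8% for the Rocky Linux VM and 6% for the FreeBSD VM in this toy test, which supports the "near-native performance" claim.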
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```sh\n\ni\n```\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. 
With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1381} + +APP 2025-04-04T20:00:28.379Z --> skipping because content is stale + +APP 2025-04-04T20:00:28.379Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:28.379Z --> sent request | {"jsonrpc":"2.0","id":272,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:00:29.690Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:32.532Z --> received didChange | language: markdown | contentVersion: 1382 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:33.916Z --> received didChange | language: markdown | contentVersion: 1383 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:36.831Z --> received didChange | language: markdown | contentVersion: 1384 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor, enabling virtualization on FreeBSD systems.
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to the virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of manual overhead.
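If you provision several hosts, the `dmesg` feature check can be wrapped in a small reusable helper. This is a hypothetical sketch (the `check_cpu_flag` name is mine, not from the post); on a live host you would feed it the output of `dmesg | grep Features2` instead of the sample string:

```sh
#!/bin/sh
# check_cpu_flag LINE FLAG: succeed if FLAG appears in a dmesg-style
# Features2 line. Flags are delimited by '<', ',' and '>' in the output.
# Hypothetical helper sketch; real dmesg output may wrap across lines.
check_cpu_flag() {
    line=$1
    flag=$2
    printf '%s\n' "$line" | grep -q "[<,]${flag}[,>]"
}

features='Features2=0x7ffafbbf<SSE3,VMX,SSE4.2,POPCNT,AESNI,AVX,RDRAND>'
check_cpu_flag "$features" POPCNT && echo "POPCNT: ok" || echo "POPCNT: missing"
```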
We also install `bhyve-firmware`, the package required to boot VMs with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with Red Hat Enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this configuration needs some changes. We also make other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs (4 CPU cores and 14 GB RAM each).
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky  default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root   bhyve   6079   8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
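As an aside, the stop, resize, and reinstall sequence from the previous step is easy to get wrong when repeated across three hosts, so it could be captured in a tiny helper. This is a hypothetical sketch (the `resize_vm_disk` and `run` names and the `DRY_RUN` switch are mine, not part of `vm-bhyve`); with `DRY_RUN` set, it only prints the commands:

```sh
#!/bin/sh
# run: execute a command via doas, or just print it when DRY_RUN is set
run() {
    if [ -n "$DRY_RUN" ]; then
        echo "$*"
    else
        doas "$@"
    fi
}

# resize_vm_disk NAME SIZE ISO: stop a vm-bhyve guest, grow its sparse
# disk image, and restart the installer. Hypothetical helper sketch.
resize_vm_disk() {
    name=$1; size=$2; iso=$3
    img="/zroot/bhyve/${name}/disk0.img"
    run vm stop "$name"
    run truncate -s "$size" "$img"
    run vm install "$name" "$iso"
}

DRY_RUN=1
resize_vm_disk rocky 100G Rocky-9.5-x86_64-minimal.iso
```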
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky  default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
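The addressing scheme is regular (FreeBSD hosts at `192.168.1.13X`, Rocky VMs at `192.168.1.12X`), so the per-node values can be derived rather than typed. A minimal sketch, assuming this scheme holds (the `host_ip`/`vm_ip` helper names are mine); it prints the `/etc/hosts` lines for the VMs:

```sh
#!/bin/sh
# Derive host and VM IPs for a node index (0, 1, 2), following the scheme
# used in this series: f<i> -> 192.168.1.13<i>, r<i> -> 192.168.1.12<i>.
host_ip() { echo "192.168.1.$((130 + $1))"; }
vm_ip()   { echo "192.168.1.$((120 + $1))"; }

for i in 0 1 2; do
    echo "$(vm_ip $i) r$i r$i.lan r$i.lan.buetow.org"
done
```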
As per the previous post of this series, the 3 FreeBSD hosts are already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add corresponding entries on the FreeBSD hosts as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to verify that everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs.
There, I set `PasswordAuthentication no` so that from now on only SSH key authentication is allowed.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// These benchmarks live in a _test.go file and are run via "go test -bench=.".
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
```
```sh
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
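Before moving on to `ubench`, a quick aside on the Go numbers above: the relative slowdown of each guest versus the bare-metal FreeBSD run can be computed with a few lines of `awk` (a sketch using the `BenchmarkCPUSilly1` ns/op figures measured above; the `slowdown` helper name is mine):

```sh
#!/bin/sh
# slowdown HOST GUEST: percentage by which GUEST ns/op exceeds HOST ns/op.
slowdown() {
    awk -v host="$1" -v guest="$2" 'BEGIN { printf "%.1f\n", (guest - host) / host * 100 }'
}

echo "Rocky Linux guest: $(slowdown 0.4022 0.4347)% slower"   # prints 8.1
echo "FreeBSD guest:     $(slowdown 0.4022 0.4273)% slower"   # prints 6.2
```

So both guests land within single-digit percent of the host, consistent with the "near-native performance" claim.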
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
```

During the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab.
Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo o\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. 
Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1393} + +APP 2025-04-04T20:00:58.273Z --> skipping because content is stale + +APP 2025-04-04T20:00:58.273Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:00:58.273Z --> sent request | {"jsonrpc":"2.0","id":274,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:00:58.288Z --> received didChange | language: markdown | contentVersion: 1394 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:58.405Z --> received didChange | language: markdown | contentVersion: 1395 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:58.491Z --> received didChange | language: markdown | contentVersion: 1396 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:58.664Z --> received didChange | language: markdown | contentVersion: 1397 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:58.728Z --> received didChange | language: markdown | contentVersion: 1398 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:58.886Z --> received didChange | language: markdown | contentVersion: 1399 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:59.016Z --> received didChange | language: markdown | contentVersion: 1400 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:59.094Z --> received didChange | language: markdown | contentVersion: 1401 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:00:59.100Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":12,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":275} + +APP 2025-04-04T20:00:59.301Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's modern, lightweight hypervisor. Its minimal overhead allows virtual machines to achieve near-native performance, and it leverages the FreeBSD operating system's capabilities for resource and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware that provides virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

`POPCNT` is a CPU instruction that counts the number of set bits (ones) in a binary number. It matters for virtualization because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports `POPCNT`, Bhyve can pass this capability through to the virtual machines; without it, some applications might not run at all or might perform sub-optimally.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

To manage the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management considerably. We also install `bhyve-firmware`, the package required to boot VMs with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs a couple of adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and since we do this once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples shown are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host; it may become useful later when there are more. After adding this, `vm list` shows `Yes` in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post in this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD hosts as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the static IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to verify that everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited `/etc/ssh/sshd_config` again on all 3 VMs and configured `PasswordAuthentication no`, so that only SSH key authentication is allowed from now on.
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (roughly 8%: 0.4347 ns/op vs. 0.4022 ns/op). The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, single-CPU performance in the VM is on par with the host, while memory throughput takes a noticeable hit.\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. 
Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1401} + +APP 2025-04-04T20:00:59.301Z --> calling completion event + +APP 2025-04-04T20:00:59.301Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}}}] + +APP 2025-04-04T20:00:59.301Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:00:59.301Z --> copilot | completion request + +APP 2025-04-04T20:00:59.302Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:01:00.371Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:01:00.372Z --> completion hints: | the Bhyve VM is slightly slower than the host system, but not by much. The memory performance is a bit worse, but I guess this is due to the fact that the VM is running on a ZFS filesystem. 
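Aside on the log records above: while a completion request is in flight, the language server publishes a severity-3 "Fetching completion..." placeholder diagnostic for the line being completed, and clears it again with an empty diagnostics list once the response arrives. A minimal sketch of those two `textDocument/publishDiagnostics` notifications, with the payload shape taken from the log entries above (the helper function itself is hypothetical):

```python
def make_diagnostics_notification(uri, diagnostics):
    """Build an LSP textDocument/publishDiagnostics notification
    (payload shape as seen in the helix-gpt log above)."""
    return {
        "jsonrpc": "2.0",
        "method": "textDocument/publishDiagnostics",
        "params": {"uri": uri, "diagnostics": diagnostics},
    }

uri = "file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"

# Placeholder published while the completion is fetched (severity 3 = Information)
placeholder = make_diagnostics_notification(uri, [{
    "message": "Fetching completion...",
    "severity": 3,
    "range": {"start": {"line": 432, "character": 0},
              "end": {"line": 433, "character": 0}},
    "source": "helix-gpt",
}])

# Cleared once the completion response has arrived
cleared = make_diagnostics_notification(uri, [])
```

The empty-list notification is what removes the placeholder from the editor; diagnostics in LSP are replaced wholesale per document, not removed individually.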
+ +APP 2025-04-04T20:01:00.372Z --> sent request | {"jsonrpc":"2.0","id":275,"result":{"isIncomplete":false,"items":[{"label":"the Bhyve VM is slightly slower than the host system, but not by much. The memory performance is a bit worse, but I guess this is due to the fact that the VM is running on a ZFS filesystem.","kind":1,"preselect":true,"detail":"the Bhyve VM is slightly slower than the host system, but not by much. The memory performance is a bit worse, but I guess this is due to the fact that the VM is running on a ZFS filesystem.","insertText":"the Bhyve VM is slightly slower than the host system, but not by much. The memory performance is a bit worse, but I guess this is due to the fact that the VM is running on a ZFS filesystem.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":432,"character":201},"end":{"line":432,"character":201}}}]}]}} + +APP 2025-04-04T20:01:00.372Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:00.882Z --> received didChange | language: markdown | contentVersion: 1402 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:00.985Z --> received didChange | language: markdown | contentVersion: 1403 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.031Z --> received didChange | language: markdown | contentVersion: 1404 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.125Z --> received didChange | language: markdown | contentVersion: 1405 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.184Z --> received didChange | language: markdown | contentVersion: 1406 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.308Z --> received didChange | language: markdown | contentVersion: 1407 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.314Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":18,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":276} + +APP 2025-04-04T20:01:01.352Z --> received didChange | language: markdown | contentVersion: 1408 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.465Z --> received didChange | language: markdown | contentVersion: 1409 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.515Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=\"io\"\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. 
Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1409} + +APP 2025-04-04T20:01:01.515Z --> skipping because content is stale + +APP 2025-04-04T20:01:01.515Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:01.516Z --> sent request | {"jsonrpc":"2.0","id":276,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:01:01.586Z --> received didChange | language: markdown | contentVersion: 1410 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:01.591Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":21,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":277} + +APP 2025-04-04T20:01:01.792Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0`, which runs on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After this addition, `vm list` shows a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
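The `r0` to `r2` host entries used in this section all follow the same pattern, so the `/etc/hosts` fragment could also be generated with a small loop instead of being written by hand (purely illustrative):

```shell
# Print the /etc/hosts fragment for the three Rocky VMs r0..r2
# (192.168.1.120-122); pipe it through "doas tee -a /etc/hosts" to apply.
for i in 0 1 2; do
    printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```
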
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't fully utilise them anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a bit of CPU overhead, but not much.\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. 
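The benchmark gap seen earlier is also easy to quantify: from the `BenchmarkCPUSilly1` figures (0.4022 ns/op on the FreeBSD host vs 0.4347 ns/op in the Rocky Linux VM), the relative CPU overhead works out to roughly 8%:

```shell
# Relative overhead of the Rocky Linux VM vs the FreeBSD host,
# computed from the sillybench ns/op numbers quoted above.
awk -v host=0.4022 -v vm=0.4347 \
    'BEGIN { printf "VM overhead: %.1f%%\n", (vm - host) / host * 100 }'
```
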
Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1410} + +APP 2025-04-04T20:01:01.793Z --> calling completion event + +APP 2025-04-04T20:01:01.793Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}}}] + +APP 2025-04-04T20:01:01.793Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:01:01.793Z --> copilot | completion request + +APP 2025-04-04T20:01:01.793Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:01:02.229Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:01:02.230Z --> completion hints: | a bit of overhead, but not much. The FreeBSD host is still faster than the Rocky Linux VM, but the Bhyve VM is still faster than the FreeBSD VM. 
+ +APP 2025-04-04T20:01:02.231Z --> sent request | {"jsonrpc":"2.0","id":277,"result":{"isIncomplete":false,"items":[{"label":"a bit of overhead, but not much. The FreeBSD host is still faster than the Rocky Linux VM, but the Bhyve VM is still faster than the FreeBSD VM.","kind":1,"preselect":true,"detail":"a bit of overhead, but not much. The FreeBSD host is still faster than the Rocky Linux VM, but the Bhyve VM is still faster than the FreeBSD VM.","insertText":"a bit of overhead, but not much. The FreeBSD host is still faster than the Rocky Linux VM, but the Bhyve VM is still faster than the FreeBSD VM.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":432,"character":165},"end":{"line":432,"character":165}}}]}]}} + +APP 2025-04-04T20:01:02.231Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:02.720Z --> received didChange | language: markdown | contentVersion: 1411 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:02.879Z --> received didChange | language: markdown | contentVersion: 1412 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:02.884Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":23,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":278} + +APP 2025-04-04T20:01:02.990Z --> received didChange | language: markdown | contentVersion: 1413 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:03.086Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"(buffer text omitted; identical to the version 1410 dump above apart from a few newly typed characters)","languageId":"markdown","version":1413} + +APP 2025-04-04T20:01:03.086Z --> skipping because content is stale + +APP 2025-04-04T20:01:03.086Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:03.086Z --> sent request | {"jsonrpc":"2.0","id":278,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:01:03.137Z --> received didChange | language: markdown | contentVersion: 1414 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:03.388Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":25,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":279} + +APP 2025-04-04T20:01:03.589Z --> running completion on buffer
| {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible.\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. 
So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1414} + +APP 2025-04-04T20:01:03.589Z --> calling completion event + +APP 2025-04-04T20:01:03.589Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}}}] + +APP 2025-04-04T20:01:03.590Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:01:03.590Z --> copilot | completion request + +APP 2025-04-04T20:01:03.590Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:01:04.072Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:01:04.073Z --> completion hints: | all overhead of Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small. + +APP 2025-04-04T20:01:04.073Z --> sent request | {"jsonrpc":"2.0","id":279,"result":{"isIncomplete":false,"items":[{"label":"all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small.","kind":1,"preselect":true,"detail":"all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small.","insertText":"all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":432,"character":172},"end":{"line":432,"character":172}}}]}]}} + +APP 2025-04-04T20:01:04.073Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:07.593Z --> received didChange | language: markdown | contentVersion: 1417 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:09.302Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:11.793Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:13.591Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:33.879Z --> received didChange | language: markdown | contentVersion: 1418 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:33.885Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":170,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":280} + +APP 2025-04-04T20:01:34.007Z --> received didChange | language: markdown | contentVersion: 1419 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:34.086Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead. We also install `bhyve-firmware`, which is required to boot VMs with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```
## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, a few adjustments are needed. Furthermore, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```
### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```
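Note that `truncate -s` only changes the file's apparent size; the image stays sparse, so no 100G are actually written to disk at this point. A quick generic illustration of the idea (not FreeBSD-specific; any shell with `mktemp` and `truncate` will do):

```sh
# Grow a fresh temp file sparsely to 1 GiB; the apparent size changes,
# but (almost) no disk blocks are allocated.
f=$(mktemp)
truncate -s 1G "$f"
wc -c < "$f"   # apparent size: 1073741824 bytes
du -k "$f"     # blocks actually used: close to 0
rm "$f"
```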
### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png
## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (for the VM `r0` running on `f0`).

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static IPs. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via the VNC console and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.
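Since the VM names and addresses follow a fixed pattern (`rN` maps to `192.168.1.12N`), the `/etc/hosts` entries could also be generated with a small loop instead of being typed out; a sketch (pipe its output into `doas tee -a /etc/hosts` as above):

```sh
# Emit one hosts(5) line per VM r0..r2, following the 192.168.1.12N pattern.
for i in 0 1 2; do
    printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```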
### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench
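One caveat with such microbenchmarks: the result of `_ = i * i` is never used, so the compiler is free to discard the loop body, and sub-nanosecond per-iteration timings are often a sign of exactly that. A common hedge (a sketch of the general technique, not how sillybench does it) is to accumulate into a package-level sink variable; `testing.Benchmark` lets us run it outside of `go test`:

```go
package main

import (
	"fmt"
	"testing"
)

// Sink is package-level; stores to it cannot be proven dead,
// so the multiplications below must actually be executed.
var Sink uint64

func main() {
	// testing.Benchmark runs a benchmark function without `go test`.
	res := testing.Benchmark(func(b *testing.B) {
		var acc uint64
		for i := 0; i < b.N; i++ {
			acc += uint64(i) * uint64(i)
		}
		Sink = acc
	})
	fmt.Println(res) // prints the iteration count and ns/op
}
```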
### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!
## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// Note: this file needs a _test.go suffix so that `go test -bench=.` picks it up.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use case.\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1430}
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:35.502Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":184,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":283} + +APP 2025-04-04T20:01:35.703Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my .\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1432} + +APP 2025-04-04T20:01:35.704Z --> calling completion event + +APP 2025-04-04T20:01:35.704Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}}}] + +APP 2025-04-04T20:01:35.704Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:01:35.704Z --> copilot | completion request + +APP 2025-04-04T20:01:35.705Z --> fetch | /v1/engines/copilot-codex/completions + 
+APP 2025-04-04T20:01:35.729Z --> received didChange | language: markdown | contentVersion: 1433 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:35.796Z --> received didChange | language: markdown | contentVersion: 1434 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:35.974Z --> received didChange | language: markdown | contentVersion: 1435 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:36.022Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:01:36.022Z --> completion hints: | use case. + +APP 2025-04-04T20:01:36.022Z --> sent request | {"jsonrpc":"2.0","id":283,"result":{"isIncomplete":false,"items":[{"label":"use case.","kind":1,"preselect":true,"detail":"use case.","insertText":"use case.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":432,"character":193},"end":{"line":432,"character":194}}}]}]}} + +APP 2025-04-04T20:01:36.022Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:36.156Z --> received didChange | language: markdown | contentVersion: 1436 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:36.162Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":188,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":284} + +APP 2025-04-04T20:01:36.258Z --> received didChange | language: markdown | contentVersion: 1437 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:36.363Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

However, this configuration needs a few changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples are all executed on `f0` (and `r0`, the VM running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required; it makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After this change, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add corresponding entries on the FreeBSD hosts as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

Finally, we install the latest updates and reboot once more:

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
```

During the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME  THR PRI NICE  SIZE  RES STATE  C  TIME    WCPU COMMAND
7449 root       14  20    0   14G  78M kqread 2  2:12 399.81% bhyve
```

So overall, there is a small overhead from Bhyve, but it is negligible.
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use case.

### Rocky Linux 9 @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:36.688Z --> received didChange | language: markdown | contentVersion: 1440 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:36.940Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":192,"line":432},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":285} + +APP 2025-04-04T20:01:37.141Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use case.\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1440} + +APP 2025-04-04T20:01:37.141Z --> calling completion event + +APP 2025-04-04T20:01:37.142Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}}}] + +APP 2025-04-04T20:01:37.142Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":432,"character":0},"end":{"line":433,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:01:37.142Z --> copilot | completion request + +APP 2025-04-04T20:01:37.142Z --> fetch | 
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:01:37.455Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:01:37.456Z --> completion hints: | s. + +APP 2025-04-04T20:01:37.456Z --> sent request | {"jsonrpc":"2.0","id":285,"result":{"isIncomplete":false,"items":[{"label":"s.","kind":1,"preselect":true,"detail":"s.","insertText":"s.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":432,"character":194},"end":{"line":432,"character":195}}}]}]}} + +APP 2025-04-04T20:01:37.456Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:38.622Z --> received didChange | language: markdown | contentVersion: 1441 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:01:45.705Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:01:47.143Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:02:44.166Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":18,"line":434},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":286} + +APP 2025-04-04T20:02:44.367Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post in the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the host running the VM `r0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux 9 @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1441} + +APP 2025-04-04T20:02:44.368Z --> calling completion event + +APP 2025-04-04T20:02:44.368Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":434,"character":0},"end":{"line":435,"character":0}}}] + +APP 2025-04-04T20:02:44.368Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":434,"character":0},"end":{"line":435,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:02:44.368Z --> copilot | completion request + +APP 2025-04-04T20:02:44.369Z --> fetch | 
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:02:44.834Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:02:44.835Z --> completion hints: | @ Bhyve `ubench` benchmark + +APP 2025-04-04T20:02:44.835Z --> sent request | {"jsonrpc":"2.0","id":286,"result":{"isIncomplete":false,"items":[{"label":"@ Bhyve `ubench` benchmark","kind":1,"preselect":true,"detail":"@ Bhyve `ubench` benchmark","insertText":"@ Bhyve `ubench` benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":434,"character":44},"end":{"line":434,"character":70}}}]}]}} + +APP 2025-04-04T20:02:44.835Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:02:45.654Z --> received didChange | language: markdown | contentVersion: 1442 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:45.764Z --> received didChange | language: markdown | contentVersion: 1443 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:45.834Z --> received didChange | language: markdown | contentVersion: 1444 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:45.840Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":21,"line":434},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":287} + +APP 2025-04-04T20:02:46.041Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`).
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
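As an aside, the silly-benchmark numbers are easier to judge as relative differences. A small sketch (the `overhead` helper is mine, not part of the post; the figures are the `BenchmarkCPUSilly1` ns/op values measured above):

```sh
# Percent difference of a guest ns/op value relative to the host value
overhead() {
	awk -v host="$1" -v guest="$2" 'BEGIN { printf "%.1f\n", (guest - host) / host * 100 }'
}

overhead 0.4022 0.4347   # Rocky Linux VM vs FreeBSD host: prints 8.1
overhead 0.4022 0.4273   # FreeBSD VM vs FreeBSD host: prints 6.2
```

So even by this crude measure, the virtualization penalty stays in the single-digit percent range.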
For `ubench`, we limit ourselves to one CPU in the first run (`-s 1`) and let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

During the all-CPU run, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is somewhat faster than the FreeBSD VM running on Bhyve (mainly in the memory benchmark), but the difference is small enough for my use cases.

### Rocky Linux 9 VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:02:46.371Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:02:46.372Z --> completion hints: | `ubench` benchmark + +APP 2025-04-04T20:02:46.372Z --> sent request | {"jsonrpc":"2.0","id":287,"result":{"isIncomplete":false,"items":[{"label":"`ubench` benchmark","kind":1,"preselect":true,"detail":"`ubench` benchmark","insertText":"`ubench` benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":434,"character":39},"end":{"line":434,"character":65}}}]}]}} + +APP 2025-04-04T20:02:46.372Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:02:50.555Z --> received didChange | language: markdown | contentVersion: 1445 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:50.717Z --> received didChange | language: markdown | contentVersion: 1446 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:54.056Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":12,"line":403},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":288} + +APP 2025-04-04T20:02:54.258Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting 
demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1446} + +APP 2025-04-04T20:02:54.258Z --> calling completion event + +APP 2025-04-04T20:02:54.258Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":403,"character":0},"end":{"line":404,"character":0}}}] + +APP 2025-04-04T20:02:54.258Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":403,"character":0},"end":{"line":404,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:02:54.258Z --> copilot | completion request + +APP 2025-04-04T20:02:54.259Z --> fetch | 
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:02:54.369Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:02:54.369Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":403,"character":0},"end":{"line":404,"character":0}}}] + +APP 2025-04-04T20:02:54.369Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":403,"character":0},"end":{"line":404,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:02:54.967Z --> received didChange | language: markdown | contentVersion: 1447 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:55.067Z --> received didChange | language: markdown | contentVersion: 1448 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:55.160Z --> received didChange | language: markdown | contentVersion: 1449 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:02:55.167Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":15,"line":403},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":289} + +APP 2025-04-04T20:02:55.368Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this default configuration needs a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The following examples are all executed on `f0` (and the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1449} + +APP 2025-04-04T20:02:55.369Z --> calling completion event + +APP 2025-04-04T20:02:55.369Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":403,"character":0},"end":{"line":404,"character":0}}}] + +APP 2025-04-04T20:02:55.369Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":403,"character":0},"end":{"line":404,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:02:55.369Z --> copilot | completion request + +APP 2025-04-04T20:02:55.369Z --> fetch | 
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:02:55.893Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:02:55.893Z --> completion hints: | @ Bhyve `ubench` benchmark + +APP 2025-04-04T20:02:55.893Z --> sent request | {"jsonrpc":"2.0","id":289,"result":{"isIncomplete":false,"items":[{"label":"@ Bhyve `ubench` benchmark","kind":1,"preselect":true,"detail":"@ Bhyve `ubench` benchmark","insertText":"@ Bhyve `ubench` benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":403,"character":41},"end":{"line":403,"character":67}}}]}]}} + +APP 2025-04-04T20:02:55.893Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:02:56.042Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:04.258Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:04.370Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:04.552Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":12,"line":346},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":290} + +APP 2025-04-04T20:03:04.755Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64\n[root@r0 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution that will run on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and tight integration with FreeBSD's storage and networking facilities.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We are going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs some adjustments. We also give the VMs beefy specs (4 CPU cores and 14GB of RAM), as we intend to run the majority of the workload in the k3s cluster on these Linux VMs.
So we run `doas vm configure rocky` and modify the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root   bhyve   6079   8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and with an installation happening once a year at most, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but with just three VMs it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To start the VM automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm` wait 5 seconds before starting each VM, and there is currently only one VM per host; it may become useful later when there are more. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts are already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD hosts as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then I edited `/etc/ssh/sshd_config` again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// sillybench_test.go -- run with: go test -bench=.
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt.

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
```

During the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead from Bhyve, but it is negligible.
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs run inside FreeBSD's Bhyve is a solid foundation for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) of the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:03:05.369Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:05.370Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":346,"character":0},"end":{"line":347,"character":0}}}] + +APP 2025-04-04T20:03:05.370Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":346,"character":0},"end":{"line":347,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:03:14.756Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:14.786Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":12,"line":292},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":292} + +APP 2025-04-04T20:03:14.988Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
```

During the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead from Bhyve, but it is negligible.
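To put a rough number on that overhead, the `Ubench Single CPU` scores of the host (671010) and of the FreeBSD VM (672792) from the runs above can be compared directly; a quick `awk` sketch:

```sh
#!/bin/sh
# Relative single-CPU ubench score difference between the FreeBSD host
# (671010) and the FreeBSD VM (672792); scores taken from the runs above
awk 'BEGIN { host = 671010; vm = 672792
             printf "%.2f%%\n", (vm - host) * 100 / host }'
```

The single-CPU scores are practically identical (the VM even scored marginally higher here, well within noise); the gap in `Ubench Single AVG` comes almost entirely from the memory score (1705237 vs. 852757).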
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:03:15.218Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:15.218Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":292,"character":0},"end":{"line":293,"character":0}}}] + +APP 2025-04-04T20:03:15.219Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":292,"character":0},"end":{"line":293,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:03:15.371Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:15.714Z --> received didChange | language: markdown | contentVersion: 1453 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:03:15.943Z --> received didChange | language: markdown | contentVersion: 1454 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:03:16.054Z --> received didChange | language: markdown | contentVersion: 1455 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:03:16.309Z --> received didChange | language: markdown | contentVersion: 1456 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:03:16.467Z --> 
received didChange | language: markdown | contentVersion: 1457 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:03:16.472Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":17,"line":292},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":293} + +APP 2025-04-04T20:03:16.675Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1457} + +APP 2025-04-04T20:03:16.675Z --> calling completion event + +APP 2025-04-04T20:03:16.675Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":292,"character":0},"end":{"line":293,"character":0}}}] + +APP 2025-04-04T20:03:16.675Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":292,"character":0},"end":{"line":293,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:03:16.675Z --> copilot | completion request + +APP 2025-04-04T20:03:16.676Z --> fetch | 
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:03:17.192Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:03:17.193Z --> completion hints: | benchmark + +APP 2025-04-04T20:03:17.193Z --> sent request | {"jsonrpc":"2.0","id":293,"result":{"isIncomplete":false,"items":[{"label":"benchmark","kind":1,"preselect":true,"detail":"benchmark","insertText":"benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":292,"character":26},"end":{"line":292,"character":35}}}]}]}} + +APP 2025-04-04T20:03:17.193Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:22.425Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":4,"line":292},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":294} + +APP 2025-04-04T20:03:22.577Z --> received didChange | language: markdown | contentVersion: 1458 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:03:22.627Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14G of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
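For the record, the unattended route on Rocky Linux would be Anaconda's kickstart mechanism. I didn't use it here, but a minimal `ks.cfg` could look roughly like this (a sketch with illustrative values only, not a configuration used in this series):

```
# ks.cfg - illustrative kickstart sketch for an unattended Rocky 9 install
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
%packages
@^minimal-environment
%end
```

The installer would pick such a file up via the `inst.ks=` kernel boot parameter.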
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples shown are all executed on `f0` (and the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
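Since the VM names and addresses follow the same pattern as the hosts, the `/etc/hosts` entries used below can also be generated with a small loop (a sketch; `192.168.1.120`-`122` is the address range chosen for the VMs):

```sh
# Print the /etc/hosts entries for the VMs r0..r2 (addresses .120-.122):
for i in 0 1 2; do
    printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```

Piping this through `doas tee -a /etc/hosts` would append the entries directly.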
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of them:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// sillybench_test.go (Go benchmarks must live in a file ending in _test.go)\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. 
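To put a number on that, the `Ubench Single CPU` scores from the host and VM runs above can be compared directly with a one-liner:

```sh
# Relative single-CPU score difference between host (671010) and VM (672792):
host=671010
vm=672792
awk -v h="$host" -v v="$vm" 'BEGIN { printf "%+.2f%%\n", (v - h) / h * 100 }'
# prints +0.27% (the VM even scores marginally higher on this run)
```

The same formula applied to the `Ubench Single MEM` scores (852757 vs. 1705237) shows a much larger gap, which explains the lower `Single AVG` in the VM.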
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1458}
CPU virtualization and Bhyve support for the POPCNT instruction matter because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM when a host boots, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required; it makes `vm-bhyve` wait 5 seconds before starting each VM, and there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
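As a side note on the silly benchmark above: the same inner loop can also be driven programmatically with Go's `testing.Benchmark` helper, which is handy for a quick one-off number without a `go test` invocation. A minimal sketch:

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// Run the silly inner loop under Go's benchmark harness.
	// testing.Benchmark picks a suitable b.N automatically.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = i * i
		}
	})
	fmt.Printf("%d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```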
With `ubench`, we limit the first run to one CPU using `-s 1`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
```

During the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE   RES STATE   C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G   78M kqread  2  2:12 399.81% bhyve
```

So overall, there is a small overhead with Bhyve, but it is negligible.
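To put rough numbers on the single-CPU comparison, here is a small sketch computing the relative differences between the `ubench` scores quoted above (host vs. FreeBSD VM):

```go
package main

import "fmt"

// Single-CPU ubench scores quoted above (FreeBSD host vs. FreeBSD VM).
const (
	hostCPU, vmCPU = 671010.0, 672792.0
	hostMEM, vmMEM = 1705237.0, 852757.0
)

// pct returns the VM score relative to the host score, in percent.
func pct(host, vm float64) float64 {
	return (vm - host) / host * 100
}

func main() {
	fmt.Printf("CPU: %+.2f%%\n", pct(hostCPU, vmCPU)) // prints CPU: +0.27%
	fmt.Printf("MEM: %+.2f%%\n", pct(hostMEM, vmMEM)) // prints MEM: -49.99%
}
```

The CPU scores are within a fraction of a percent of each other (the VM even scores marginally higher, which is within run-to-run noise), while the VM's memory score is roughly half of the host's; that memory gap is what drags down the VM's single-run average.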
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1469} + +APP 2025-04-04T20:03:29.965Z --> calling completion event + +APP 2025-04-04T20:03:29.965Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":320,"character":0},"end":{"line":321,"character":0}}}] + +APP 2025-04-04T20:03:29.965Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":320,"character":0},"end":{"line":321,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:03:29.965Z --> copilot | completion request + +APP 2025-04-04T20:03:29.966Z --> fetch | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s?
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows virtual machines to achieve near-native performance, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that provide virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve requires POPCNT support on the host, and guest operating systems use the instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to its virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

A few adjustments are needed to make Rocky Linux boot. We also give the VMs beefy specs (4 CPU cores and 14GB of RAM), as we intend to run the majority of the workload in the k3s cluster on these Linux VMs. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME  DATASTORE LOADER CPU MEMORY VNC          AUTO STATE
rocky default   uefi   4   14G    0.0.0.0:5900 No   Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed for the VM `r0` running on host `f0`:

### VM auto-start after host reboot

To start the VMs automatically when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE LOADER CPU MEMORY VNC          AUTO    STATE
rocky default   uefi   4   14G    0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD (installed via `doas pkg install ubench`). It can benchmark CPU and memory performance.
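
(A quick aside on the Go numbers above: the ns/op deltas are easier to judge as percentages. A throwaway Go calculation, using only the figures already shown; this is my own back-of-the-envelope math, not from the original post:)

```go
package main

import "fmt"

func main() {
	// ns/op figures from the silly benchmark runs (BenchmarkCPUSilly1).
	host := 0.4022      // FreeBSD host
	linuxVM := 0.4347   // Rocky Linux VM on Bhyve
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve

	pct := func(vm float64) float64 { return (vm - host) / host * 100 }
	fmt.Printf("Linux VM overhead:   %.1f%%\n", pct(linuxVM))   // ~8.1%
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", pct(freebsdVM)) // ~6.2%
}
```

So both VMs land within roughly 6-8% of the bare host in this micro-benchmark.
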
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
```

During the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME THR PRI NICE SIZE RES STATE  C TIME WCPU    COMMAND
 7449 root      14  20    0 14G  78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\n```\n\nDuring the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. 
The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1475} + +APP 2025-04-04T20:03:36.154Z --> calling completion event + +APP 2025-04-04T20:03:36.154Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":346,"character":0},"end":{"line":347,"character":0}}}] + +APP 2025-04-04T20:03:36.154Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":346,"character":0},"end":{"line":347,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:03:36.154Z --> copilot | completion request + +APP 2025-04-04T20:03:36.154Z --> fetch | 
/v1/engines/copilot-codex/completions + +APP 2025-04-04T20:03:36.444Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:03:36.445Z --> completion hints: | FreeBSD VM @ Bhyve benchmark + +APP 2025-04-04T20:03:36.445Z --> sent request | {"jsonrpc":"2.0","id":299,"result":{"isIncomplete":false,"items":[{"label":"FreeBSD VM @ Bhyve benchmark","kind":1,"preselect":true,"detail":"FreeBSD VM @ Bhyve benchmark","insertText":"FreeBSD VM @ Bhyve benchmark","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":346,"character":38},"end":{"line":346,"character":66}}}]}]}} + +APP 2025-04-04T20:03:36.445Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:39.967Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:45.124Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:03:46.155Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:13.538Z --> received didChange | language: markdown | contentVersion: 1476 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:14.521Z --> received didChange | language: markdown | 
contentVersion: 1477 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:15.536Z --> received didChange | language: markdown | contentVersion: 1478 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:17.053Z --> received didChange | language: markdown | contentVersion: 1479 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:21.763Z --> received didChange | language: markdown | contentVersion: 1480 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:21.906Z --> received didChange | language: markdown | contentVersion: 1481 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:22.007Z --> received didChange | language: markdown | contentVersion: 1482 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:22.071Z --> received didChange | language: markdown | contentVersion: 1483 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:22.295Z --> received didChange | language: markdown | contentVersion: 1484 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:22.322Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":5,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":300} + +APP 
2025-04-04T20:04:22.358Z --> received didChange | language: markdown | contentVersion: 1485 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:22.364Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":6,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":301} + +APP 2025-04-04T20:04:22.565Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the default configuration needs some changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or at least more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
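A side note on the `truncate` step above: `truncate -s` grows the image file sparsely, so the extra space is not actually allocated on the ZFS dataset until the guest writes data to it. This can be demonstrated with a throwaway file on any Unix-like system (an illustration only, not part of the setup itself):

```sh
# Create a sparse 100G file: the apparent size (ls) is 100G, but
# du shows that almost no blocks are allocated until data is written.
truncate -s 100G demo.img
ls -lh demo.img
du -h demo.img
rm demo.img
```

This is also why enlarging the disk image is instantaneous and does not immediately consume pool space.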
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive) and set a root user password. After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples shown are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required: it makes the host wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After this change, `vm list` shows a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't make use of all of them anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
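For a rough sense of the gap in the silly Go numbers above (0.4022 ns/op on the FreeBSD host vs. 0.4347 ns/op in the Rocky Linux VM), the relative difference can be computed directly; this is a back-of-the-envelope figure, not a rigorous statistical comparison:

```sh
# Relative slowdown of the VM vs. the host, using the ns/op figures
# from the BenchmarkCPUSilly1 runs quoted above.
awk 'BEGIN { printf "VM is %.1f%% slower per op\n", (0.4347 / 0.4022 - 1) * 100 }'
# prints: VM is 8.1% slower per op
```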
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe memory part of the benchmark exhausted the VM's memory and swap, so the kernel repeatedly killed the `ubench` processes.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1485} + +APP 2025-04-04T20:04:22.566Z --> calling completion event + +APP 2025-04-04T20:04:22.566Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:04:22.566Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:04:22.566Z --> copilot | completion request + +APP 2025-04-04T20:04:22.567Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:04:22.947Z --> received didChange | language: markdown | contentVersion: 1486 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:23.140Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:04:23.141Z --> completion hints: | the VM was killed because it ran out of memory. I had to increase the RAM of the VM to 14GB to run the benchmark successfully. + +APP 2025-04-04T20:04:23.141Z --> sent request | {"jsonrpc":"2.0","id":301,"result":{"isIncomplete":false,"items":[{"label":"the VM was killed because it ran out of memory. I had to increase the RAM of the VM to 14GB to run the benchmark successfully.","kind":1,"preselect":true,"detail":"the VM was killed because it ran out of memory. 
I had to increase the RAM of the VM to 14GB to run the benchmark successfully.","insertText":"the VM was killed because it ran out of memory. I had to increase the RAM of the VM to 14GB to run the benchmark successfully.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":132},"end":{"line":443,"character":249}}}]}]}} + +APP 2025-04-04T20:04:23.141Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:28.569Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":27,"line":417},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":302} + +APP 2025-04-04T20:04:28.771Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, this configuration needs some adjustments. For example, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1486} + +APP 2025-04-04T20:04:28.771Z --> calling completion event + +APP 2025-04-04T20:04:28.771Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":417,"character":0},"end":{"line":418,"character":0}}}] + +APP 2025-04-04T20:04:28.772Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":417,"character":0},"end":{"line":418,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:04:28.772Z --> copilot | completion request + +APP 2025-04-04T20:04:28.772Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:04:29.143Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:04:29.144Z --> completion hints: | +``` + +APP 2025-04-04T20:04:29.145Z --> sent request | {"jsonrpc":"2.0","id":302,"result":{"isIncomplete":false,"items":[{"label":"```","kind":1,"preselect":true,"detail":"```","insertText":"```","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":417,"character":30},"end":{"line":417,"character":30}}}]}]}} + +APP 2025-04-04T20:04:29.145Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:32.567Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:38.773Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:43.779Z --> received didChange | language: markdown | contentVersion: 1487 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:43.936Z --> received didChange | language: markdown | contentVersion: 1488 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:44.715Z --> received didChange | language: markdown | contentVersion: 1489 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:44.813Z --> received didChange | language: markdown | contentVersion: 1490 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:44.891Z --> received didChange | language: markdown | contentVersion: 1491 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:44.896Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":3,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":303} + +APP 2025-04-04T20:04:45.097Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples are all executed on `f0` (and in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we switch the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // keep the result alive to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
 git clone https://codeberg.org/snonux/sillybench && \
 cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

So the memory part of the benchmark could not complete: the VM ran out of memory and swap space, and the kernel killed the `ubench` processes before the `MEM` and `AVG` results could be reported.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is some overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust the configuration to make Rocky Linux boot (plus some other tweaks: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nSo the benchmark was killed because the VM ran out of memory and swap space.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1505} + +APP 2025-04-04T20:04:52.901Z --> calling completion event + +APP 2025-04-04T20:04:52.901Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:04:52.901Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:04:52.901Z --> copilot | completion request + +APP 2025-04-04T20:04:52.901Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:04:53.348Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:04:53.350Z --> completion hints: | failed because the VM ran out of memory. I had to increase the RAM of the VM to 14GB (the same as the host) and re-run the benchmark. + +APP 2025-04-04T20:04:53.350Z --> sent request | {"jsonrpc":"2.0","id":305,"result":{"isIncomplete":false,"items":[{"label":"failed because the VM ran out of memory. I had to increase the RAM of the VM to 14GB (the same as the host) and re-run the benchmark.","kind":1,"preselect":true,"detail":"failed because the VM ran out of memory. I had to increase the RAM of the VM to 14GB (the same as the host) and re-run the benchmark.","insertText":"failed because the VM ran out of memory. 
I had to increase the RAM of the VM to 14GB (the same as the host) and re-run the benchmark.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":150},"end":{"line":443,"character":150}}}]}]}} + +APP 2025-04-04T20:04:53.350Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:53.693Z --> received didChange | language: markdown | contentVersion: 1506 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:54.306Z --> received didChange | language: markdown | contentVersion: 1507 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:54.584Z --> received didChange | language: markdown | contentVersion: 1508 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.086Z --> received didChange | language: markdown | contentVersion: 1509 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.099Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:55.116Z --> received didChange | language: markdown | contentVersion: 1510 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.146Z --> received didChange | language: markdown | contentVersion: 1511 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.174Z --> received didChange | language: markdown | contentVersion: 1512 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.206Z --> received didChange | language: markdown | contentVersion: 1513 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.235Z --> received didChange | language: markdown | contentVersion: 1514 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.263Z --> received didChange | language: markdown | contentVersion: 1515 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.294Z --> received didChange | language: markdown | contentVersion: 1516 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.324Z --> received didChange | language: markdown | contentVersion: 1517 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.354Z --> received didChange | language: markdown | contentVersion: 1518 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.384Z --> received didChange | language: markdown | contentVersion: 1519 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.414Z --> received didChange | language: markdown | contentVersion: 1520 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.560Z --> received didChange | language: markdown | contentVersion: 1521 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.754Z --> received didChange | language: markdown | contentVersion: 1522 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:55.872Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:55.872Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:55.926Z --> received didChange | language: markdown | contentVersion: 1523 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.090Z --> received didChange | language: markdown | contentVersion: 1524 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.203Z --> received didChange | language: markdown | contentVersion: 1525 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.311Z --> received didChange | language: markdown | contentVersion: 1526 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.387Z --> received didChange | language: markdown | 
contentVersion: 1527 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.500Z --> received didChange | language: markdown | contentVersion: 1528 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.506Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":4,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":306} + +APP 2025-04-04T20:04:56.637Z --> received didChange | language: markdown | contentVersion: 1529 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.708Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
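Since the Bhyve setup commands from the section above have to be repeated on `f0`, `f1`, and `f2`, they could be generated by a small script. A hedged sketch: `gen_vm_setup` is my own helper name, and it only prints the command sequence, which would then be run as root (or via `doas`) on each host.

```shell
# Sketch: print the vm-bhyve bootstrap sequence for a given NIC so the
# identical steps can be replayed on all three hosts.
gen_vm_setup() {
    nic="$1"  # e.g. re0; adjust if your hardware differs
    cat <<EOF
pkg install -y vm-bhyve bhyve-firmware
sysrc vm_enable=YES
sysrc vm_dir=zfs:zroot/bhyve
zfs create zroot/bhyve
vm init
vm switch create public
vm switch add public $nic
EOF
}

gen_vm_setup re0  # prints the sequence for the re0 interface
```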
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (and on the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
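The `/etc/hosts` entries for the three VMs follow a fixed pattern, so they can be generated rather than typed. A sketch (the `rocky_hosts` helper is my own; `192.168.1.120`-`122` are the static IPs chosen for `r0`-`r2` below):

```shell
# Sketch: emit the /etc/hosts lines for the Rocky VMs r0..r2.
rocky_hosts() {
    for i in 0 1 2; do
        printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
    done
}

rocky_hosts  # prints the three host entries
```

On each FreeBSD host, the output could then be appended with `rocky_hosts | doas tee -a /etc/hosts`.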
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to demonstrate that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark completed, but the memory benchmark exhausted the VM's memory and swap, and the kernel killed the `ubench` processes.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1529} + +APP 2025-04-04T20:04:56.708Z --> skipping because content is stale + +APP 2025-04-04T20:04:56.708Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:56.708Z --> sent request | {"jsonrpc":"2.0","id":306,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:04:56.725Z --> received didChange | language: markdown | contentVersion: 1530 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.913Z --> received didChange | language: markdown | contentVersion: 1531 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.990Z --> received didChange | language: markdown | contentVersion: 1532 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:56.996Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":8,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":307} + +APP 2025-04-04T20:04:57.170Z --> received didChange | language: markdown | contentVersion: 1533 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:57.197Z --> 
running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","languageId":"markdown","version":1533} + +APP 2025-04-04T20:04:57.197Z --> skipping because content is stale + +APP 2025-04-04T20:04:57.197Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:57.197Z --> sent request | {"jsonrpc":"2.0","id":307,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:04:57.230Z --> received didChange | language: markdown | contentVersion: 1534 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:57.398Z --> received didChange | language: markdown | contentVersion: 1535 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:57.464Z --> received didChange | language: markdown | contentVersion: 1536 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:57.716Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":12,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":308} + +APP 2025-04-04T20:04:57.777Z --> received didChange | language: markdown | contentVersion: 1537 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:57.859Z --> received didChange | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
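As an aside, what POPCNT computes is easy to see from Go (a hypothetical snippet, not part of the f3s setup): `math/bits.OnesCount64` is typically compiled down to a single POPCNT instruction on amd64 CPUs that support it.

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	// POPCNT counts the set bits (ones) in a word. On amd64 CPUs with
	// POPCNT support, Go lowers bits.OnesCount64 to that instruction.
	x := uint64(0b1011_0010)
	fmt.Println(bits.OnesCount64(x)) // 4
}
```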
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust this if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
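A note on the `truncate` step above: it only grows the file's logical size (the image stays sparse), which is what the guest sees as disk capacity. A quick Go sketch of the same idea, using a temporary file and 100MB as a stand-in for the real `disk0.img` and 100G:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Grow a sparse file the way `truncate -s 100G disk0.img` does,
	// then read back the new logical size (temp file, 100MB stand-in).
	f, err := os.CreateTemp("", "disk0-*.img")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if err := f.Truncate(100 << 20); err != nil {
		panic(err)
	}
	fi, err := f.Stat()
	if err != nil {
		panic(err)
	}
	fmt.Println(fi.Size()) // 104857600
}
```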
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below were all executed on `f0` and on the VM `r0` (which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes vm-bhyve wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
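The VM addresses follow a simple pattern: 192.168.1.120 plus the VM index. A throwaway Go sketch that generates the `/etc/hosts` lines used in this section:

```go
package main

import "fmt"

func main() {
	// Generate the /etc/hosts entries for the three Rocky VMs r0..r2,
	// starting at 192.168.1.120.
	for i := 0; i < 3; i++ {
		fmt.Printf("192.168.1.%d r%d r%d.lan r%d.lan.buetow.org\n", 120+i, i, i, i)
	}
}
```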
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM (`nmcli` is NetworkManager's command-line tool):

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5)
linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
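Before the ubench numbers, it's worth quantifying the silly-benchmark gap from above. Plugging the BenchmarkCPUSilly1 results into a quick calculation (host 0.4022 ns/op vs. Rocky Linux VM 0.4347 ns/op) shows a difference of about 8%, part of which may be the older Go toolchain in the VM rather than Bhyve itself:

```go
package main

import "fmt"

func main() {
	// ns/op from the BenchmarkCPUSilly1 runs: FreeBSD host vs. Rocky Linux VM.
	host, vm := 0.4022, 0.4347
	fmt.Printf("relative difference: %.1f%%\n", (vm-host)/host*100)
}
```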
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The CPU benchmark completed with a score very close to the host's (2652857 vs. 2660220), but the memory benchmark exhausted the VM's memory and swap, so the kernel killed the `ubench` processes before it could finish.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.068Z --> received didChange | language: markdown | contentVersion: 1545 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.139Z --> received didChange | language: markdown | contentVersion: 1546 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.145Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":18,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":309} + +APP 2025-04-04T20:04:59.265Z --> received didChange | language: markdown | contentVersion: 1547 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.346Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through, but the memory benchmark made the VM run out of memory and swap, so the kernel killed the `ubench` processes before a `MEM` score could be reported.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1547} + +APP 2025-04-04T20:04:59.346Z --> skipping because content is stale + +APP 2025-04-04T20:04:59.346Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:59.346Z --> sent request | {"jsonrpc":"2.0","id":309,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:04:59.355Z --> received didChange | language: markdown | contentVersion: 1548 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.401Z --> received didChange | language: markdown | contentVersion: 1549 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.652Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":21,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":310} + +APP 2025-04-04T20:04:59.705Z --> received didChange | language: markdown | contentVersion: 1550 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.709Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":22,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":311} + +APP 2025-04-04T20:04:59.845Z --> received didChange | language: markdown | contentVersion: 1551 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:04:59.911Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through, but the memory benchmark made the VM run out of memory and swap, so the kernel killed the `ubench` processes before a `MEM` score could be reported.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1551} + +APP 2025-04-04T20:04:59.911Z --> skipping because content is stale + +APP 2025-04-04T20:04:59.911Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:04:59.911Z --> sent request | {"jsonrpc":"2.0","id":311,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:04:59.959Z --> received didChange | language: markdown | contentVersion: 1552 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.082Z --> received didChange | language: markdown | contentVersion: 1553 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.222Z --> received didChange | language: markdown | contentVersion: 1554 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.294Z --> received didChange | language: markdown | contentVersion: 1555 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.450Z --> received didChange | language: markdown | contentVersion: 1556 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.539Z --> received didChange | language: markdown | contentVersion: 1557 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.791Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":29,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":312} + +APP 2025-04-04T20:05:00.907Z --> received didChange | language: markdown | contentVersion: 1558 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.910Z --> received didChange | language: markdown | contentVersion: 1559 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:00.916Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":31,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":313} + +APP 2025-04-04T20:05:01.055Z --> received didChange | language: markdown | contentVersion: 1560 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:01.117Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to change the loader to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through, but the memory benchmark exhausted the VM's memory and swap space, and the kernel killed the `ubench` processes before they could finish.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would include additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1560} + +APP 2025-04-04T20:05:01.117Z --> skipping because content is stale + +APP 2025-04-04T20:05:01.117Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:01.117Z --> sent request | {"jsonrpc":"2.0","id":313,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:01.137Z --> received didChange | language: markdown | contentVersion: 1561 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:01.191Z --> received didChange | language: markdown | contentVersion: 1562 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:01.361Z --> received didChange | language: markdown | contentVersion: 1563 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:01.364Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":35,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":314} + +APP 2025-04-04T20:05:01.504Z --> received didChange | language: markdown | contentVersion: 1564 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:01.566Z --> 
running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to change the loader to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through, but the memory benchmark made the VM run out of swap space, and the `ubench` processes were killed by the kernel.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1564} + +APP 2025-04-04T20:05:01.566Z --> skipping because content is stale + +APP 2025-04-04T20:05:01.566Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:01.566Z --> sent request | {"jsonrpc":"2.0","id":314,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:01.598Z --> received didChange | language: markdown | contentVersion: 1565 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:01.690Z --> received didChange | language: markdown | contentVersion: 1566 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:01.941Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":38,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":315} + +APP 2025-04-04T20:05:02.143Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to modify the configuration to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through, but the memory benchmark made the VM run out of swap space, and the `ubench` processes were killed by the kernel.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1566} + +APP 2025-04-04T20:05:02.144Z --> calling completion event + +APP 2025-04-04T20:05:02.144Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:02.144Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:02.144Z --> copilot | completion request + +APP 2025-04-04T20:05:02.144Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:05:02.683Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:05:02.684Z --> completion hints: | memory benchmark failed because the VM was running out of memory. I had to increase the RAM of the VM to 14GB (from 12GB) to run the memory benchmark. + +APP 2025-04-04T20:05:02.684Z --> sent request | {"jsonrpc":"2.0","id":315,"result":{"isIncomplete":false,"items":[{"label":"memory benchmark failed because the VM was running out of memory. I had to increase the RAM of the VM to 14GB (from 12GB) to run the memory benchmark.","kind":1,"preselect":true,"detail":"memory benchmark failed because the VM was running out of memory. I had to increase the RAM of the VM to 14GB (from 12GB) to run the memory benchmark.","insertText":"memory benchmark failed because the VM was running out of memory. 
I had to increase the RAM of the VM to 14GB (from 12GB) to run the memory benchmark.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":188},"end":{"line":443,"character":188}}}]}]}} + +APP 2025-04-04T20:05:02.684Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:02.902Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:12.146Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:24.941Z --> received didChange | language: markdown | contentVersion: 1567 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:25.160Z --> failed to parse line: | failed to parse | Content-Length: 18572
+
+ + +APP 2025-04-04T20:05:25.162Z --> received didChange | language: markdown | contentVersion: 1568 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:25.166Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":30,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":316} + +APP 2025-04-04T20:05:25.368Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to modify the configuration to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
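If the per-host steps ever do become worth automating, a simple loop over the hosts would go a long way. This is a hypothetical sketch, not part of the original setup: it assumes SSH access and passwordless `doas` on `f0` to `f2`, and it is a dry run by default, only printing the commands until `RUN` is set to `ssh`:

```sh
# Dry run by default: invoke with RUN=ssh to actually execute on the hosts.
RUN=${RUN:-echo}
ISO_URL=https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso

for host in f0 f1 f2; do
    # Fetch the installer ISO and create the VM on each host.
    $RUN "$host" "doas vm iso $ISO_URL && doas vm create rocky"
done
```

With `RUN=echo` this prints one provisioning command line per host, which makes it easy to review before running it for real.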
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM when the server boots, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we switch the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// sillybench_test.go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM, which is more than the benchmark needs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The CPU benchmark ran through, but the memory benchmark failed as the VM ran out of memory and swap, and the kernel killed the `ubench` processes (as seen in the output above).

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
","triggerKind":2},"position":{"character":42,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":318} + +APP 2025-04-04T20:05:27.159Z --> received didChange | language: markdown | contentVersion: 1581 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:27.208Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, this configuration needs some adjustments (plus, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through, but the memory benchmark failed: the VM ran out of swap space, and the `ubench` processes were killed.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1581} + +APP 2025-04-04T20:05:27.208Z --> skipping because content is stale + +APP 2025-04-04T20:05:27.208Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:27.208Z --> sent request | {"jsonrpc":"2.0","id":318,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:27.232Z --> received didChange | language: markdown | contentVersion: 1582 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:27.484Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":44,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":319} + +APP 2025-04-04T20:05:27.685Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, this configuration needs some adjustments (plus, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (as a Go benchmark, it has to live in a file ending in `_test.go`):\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through with an almost identical result, but the memory benchmark failed as the VM ran out of memory.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1582} + +APP 2025-04-04T20:05:27.685Z --> calling completion event + +APP 2025-04-04T20:05:27.685Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:27.686Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:27.686Z --> copilot | completion request + +APP 2025-04-04T20:05:27.686Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:05:27.997Z --> received didChange | language: markdown | contentVersion: 1583 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:28.195Z --> received didChange | language: markdown | contentVersion: 1584 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:28.368Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:05:28.369Z --> completion hints: | fault values, but the memory benchmark failed. I guess this is because the VM was limited to 12GB of RAM, and the host system was already using 14GB of RAM. So the VM didn't have enough memory to run the benchmark. 
+ +APP 2025-04-04T20:05:28.369Z --> sent request | {"jsonrpc":"2.0","id":319,"result":{"isIncomplete":false,"items":[{"label":"fault values, but the memory benchmark failed. I guess this is because the VM was limited to 12GB of RAM, and the host system was already using 14GB of RAM. So the VM didn't have enough memory to run the benchmark.","kind":1,"preselect":true,"detail":"fault values, but the memory benchmark failed. I guess this is because the VM was limited to 12GB of RAM, and the host system was already using 14GB of RAM. So the VM didn't have enough memory to run the benchmark.","insertText":"fault values, but the memory benchmark failed. I guess this is because the VM was limited to 12GB of RAM, and the host system was already using 14GB of RAM. So the VM didn't have enough memory to run the benchmark.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":258},"end":{"line":443,"character":258}}}]}]}} + +APP 2025-04-04T20:05:28.369Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:28.423Z --> received didChange | language: markdown | contentVersion: 1585 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:28.545Z --> received didChange | language: markdown | contentVersion: 1586 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:28.579Z --> received didChange | language: markdown | contentVersion: 1587 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:28.714Z --> received didChange | language: markdown | contentVersion: 1588 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:28.818Z --> received didChange | language: markdown | contentVersion: 1589 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:28.919Z --> received didChange | language: markdown | contentVersion: 1590 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:29.031Z --> received didChange | language: markdown | contentVersion: 1591 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:29.170Z --> received didChange | language: markdown | contentVersion: 1592 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:29.218Z --> received didChange | language: markdown | contentVersion: 1593 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:29.469Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":51,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":320} + +APP 2025-04-04T20:05:29.671Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
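If the repetition ever does become worth scripting, the per-host steps could be driven by a small loop. A minimal dry-run sketch (it only prints the commands instead of executing them over SSH; the host names and the VM name are taken from this post):

```sh
# Dry-run sketch: print the install command for each host instead of
# executing it, since this is only an illustration.
for host in f0 f1 f2; do
  echo "ssh $host doas vm install rocky Rocky-9.5-x86_64-minimal.iso"
done
```

Dropping the `echo` would run the commands for real, assuming passwordless SSH and `doas` access on the hosts.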
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The following examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nNext, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (as a Go benchmark, it has to live in a file ending in `_test.go`):\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through with an almost identical result, but the memory benchmark failed as the VM ran out of memory.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1593} + +APP 2025-04-04T20:05:29.671Z --> calling completion event + +APP 2025-04-04T20:05:29.671Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:29.671Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:29.671Z --> copilot | completion request + +APP 2025-04-04T20:05:29.672Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:05:29.693Z --> received didChange | language: markdown | contentVersion: 1594 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:29.699Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":52,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":321} + +APP 2025-04-04T20:05:29.785Z --> received didChange | language: markdown | contentVersion: 1595 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:29.829Z --> received didChange | language: markdown | contentVersion: 1596 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows it to achieve near-native performance for virtual machines, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
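As a small illustration of what the instruction actually computes, Go's `math/bits` package exposes the same operation; on amd64, `bits.OnesCount64` compiles down to a `POPCNT` instruction when the CPU supports it. This is just a sketch for illustration, not part of the setup:

```go
package main

import (
	"fmt"
	"math/bits"
)

// popcnt mirrors what the CPU's POPCNT instruction computes:
// the number of set bits in a 64-bit word.
func popcnt(x uint64) int {
	return bits.OnesCount64(x)
}

func main() {
	fmt.Println(popcnt(0b1011))     // prints 3
	fmt.Println(popcnt(0xFFFFFFFF)) // prints 32
}
```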
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But the configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
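The addressing follows a simple scheme: VM `rN` gets `192.168.1.12N`, mirroring the FreeBSD hosts `fN` at `192.168.1.13N`. As a throwaway illustration of that scheme (not part of the actual setup), a Go sketch generating such `/etc/hosts` entries could look like this:

```go
package main

import "fmt"

// hostsEntries generates /etc/hosts lines for n VMs named r0..r(n-1),
// starting at the given last-octet base (e.g. 120 -> 192.168.1.120).
func hostsEntries(n, base int) []string {
	var lines []string
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("r%d", i)
		lines = append(lines, fmt.Sprintf("192.168.1.%d %s %s.lan %s.lan.buetow.org",
			base+i, name, name, name))
	}
	return lines
}

func main() {
	for _, l := range hostsEntries(3, 120) {
		fmt.Println(l) // e.g. "192.168.1.120 r0 r0.lan r0.lan.buetow.org"
	}
}
```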
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Here:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
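Before that, a quick sanity check on the silly benchmark numbers above. The relative slowdown of each VM versus the FreeBSD host can be quantified from the `ns/op` figures; this is a minimal sketch using the `BenchmarkCPUSilly1` timings from the runs above:

```go
package main

import "fmt"

// overheadPercent returns how much slower `guest` is than `host`,
// as a percentage of the host's per-operation time.
func overheadPercent(hostNsPerOp, guestNsPerOp float64) float64 {
	return (guestNsPerOp - hostNsPerOp) / hostNsPerOp * 100
}

func main() {
	// ns/op numbers taken from the sillybench runs above.
	fmt.Printf("Rocky Linux VM: %.1f%%\n", overheadPercent(0.4022, 0.4347)) // ~8.1%
	fmt.Printf("FreeBSD VM:     %.1f%%\n", overheadPercent(0.4022, 0.4273)) // ~6.2%
}
```

So even in the worst case, the guest stays within roughly 10% of the host's per-operation time in this toy workload.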
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The CPU benchmark ran through with almost identical results. The memory benchmark, however, did not complete: as the kernel messages above show, the VM ran out of memory and swap space, and the `ubench` processes were killed.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME  THR PRI NICE  SIZE  RES  STATE  C  TIME    WCPU     COMMAND
7449 root       14  20    0   14G  78M  kqread 2  2:12  399.81%    bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T20:05:30.344Z --> sent request | {"jsonrpc":"2.0","id":320,"result":{"isIncomplete":false,"items":[{"label":"results, but the memory benchmark failed with `swap_pager: out of swap space`. This is because the VM was limited to 12GB of RAM, and the FreeBSD host had only 14GB of RAM. So the VM was using all available RAM, and the host didn't have enough left for the `ubench` process.","kind":1,"preselect":true,"detail":"results, but the memory benchmark failed with `swap_pager: out of swap space`. This is because the VM was limited to 12GB of RAM, and the FreeBSD host had only 14GB of RAM. So the VM was using all available RAM, and the host didn't have enough left for the `ubench` process.","insertText":"results, but the memory benchmark failed with `swap_pager: out of swap space`. This is because the VM was limited to 12GB of RAM, and the FreeBSD host had only 14GB of RAM. So the VM was using all available RAM, and the host didn't have enough left for the `ubench` process.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":325},"end":{"line":443,"character":325}}}]}]}} + +APP 2025-04-04T20:05:30.344Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:30.524Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":57,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":322} + +APP 2025-04-04T20:05:30.727Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use more CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Of course, this is not a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
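Before that, a quick aside on the silly-benchmark numbers: the gap between the host and the VMs works out to single-digit percentages. A small awk check (ns/op values copied from the `BenchmarkCPUSilly1` runs above):

```sh
# Relative slowdown of the VM runs versus the FreeBSD host run.
awk 'BEGIN {
    host  = 0.4022   # FreeBSD host
    linux = 0.4347   # Rocky Linux VM on Bhyve
    bsdvm = 0.4273   # FreeBSD VM on Bhyve
    printf "linux vm: +%.1f%%\n", (linux - host) / host * 100
    printf "bsd vm:   +%.1f%%\n", (bsdvm - host) / host * 100
}'
```

So roughly +8% for the Linux guest and +6% for the FreeBSD guest in this one micro-benchmark. With that in mind, back to `ubench`.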
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The CPU benchmark ran through with an almost identical result, but the memory benchmark crashed: the VM ran out of memory and swap, and the kernel killed the `ubench` processes.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES  STATE   C  TIME  WCPU     COMMAND
 7449 root       14  20    0   14G  78M  kqread  2  2:12  399.81%  bhyve
```

So overall, there is a small overhead of Bhyve, but it is negligible for my use cases: the FreeBSD host is only slightly faster on CPU than the FreeBSD VM running on Bhyve (though the VM's single-CPU memory score came in noticeably lower).

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
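As a sketch of what adding such an extra guest would look like with `vm-bhyve` (guest name, disk size, and ISO file are purely illustrative; a new guest would also need the same `vm configure` adjustments as the Rocky VMs):

```sh
paul@f0:~ % doas vm create -s 50G playground
paul@f0:~ % doas vm install playground some-guest-os.iso
```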
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T20:05:31.348Z --> sent request | {"jsonrpc":"2.0","id":322,"result":{"isIncomplete":false,"items":[{"label":"ts, but the memory benchmark failed with `swap_pager: out of swap space`. This is because the VM was configured with 12GB of RAM, and the FreeBSD host had only 14GB of RAM. So the FreeBSD VM running inside Bhyve was not able to allocate enough memory for the benchmark.","kind":1,"preselect":true,"detail":"ts, but the memory benchmark failed with `swap_pager: out of swap space`. This is because the VM was configured with 12GB of RAM, and the FreeBSD host had only 14GB of RAM. So the FreeBSD VM running inside Bhyve was not able to allocate enough memory for the benchmark.","insertText":"ts, but the memory benchmark failed with `swap_pager: out of swap space`. This is because the VM was configured with 12GB of RAM, and the FreeBSD host had only 14GB of RAM. So the FreeBSD VM running inside Bhyve was not able to allocate enough memory for the benchmark.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":326},"end":{"line":443,"character":326}}}]}]}} + +APP 2025-04-04T20:05:31.349Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:32.356Z --> received didChange | language: markdown | contentVersion: 1602 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:32.688Z --> received didChange | language: markdown | contentVersion: 1603 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:32.939Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":323} + +APP 2025-04-04T20:05:33.066Z --> received didChange | language: markdown | contentVersion: 1604 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:33.140Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nwThe CPU benchmark run through with almost identical results\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1604} + +APP 2025-04-04T20:05:33.140Z --> skipping because content is stale + +APP 2025-04-04T20:05:33.140Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:33.140Z --> sent request | {"jsonrpc":"2.0","id":323,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:33.227Z --> received didChange | language: markdown | contentVersion: 1605 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:34.086Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":4,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":324} + +APP 2025-04-04T20:05:34.289Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut a few changes are needed to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU benchmark ran through with almost identical results, but the memory benchmark ran out of memory and was killed.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1605} + +APP 2025-04-04T20:05:34.289Z --> calling completion event + +APP 2025-04-04T20:05:34.289Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:34.289Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:34.289Z --> copilot | completion request + +APP 2025-04-04T20:05:34.289Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:05:34.308Z --> received didChange | language: markdown | contentVersion: 1606 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:34.922Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:05:34.922Z --> completion hints: | ubench process was killed because it tried to allocate more memory than the VM had available. I had to stop the VM and increase the RAM from 12GB to 14GB. After that, I could run the benchmark again. + +APP 2025-04-04T20:05:34.922Z --> sent request | {"jsonrpc":"2.0","id":324,"result":{"isIncomplete":false,"items":[{"label":"ubench process was killed because it tried to allocate more memory than the VM had available. I had to stop the VM and increase the RAM from 12GB to 14GB. 
After that, I could run the benchmark again.","kind":1,"preselect":true,"detail":"ubench process was killed because it tried to allocate more memory than the VM had available. I had to stop the VM and increase the RAM from 12GB to 14GB. After that, I could run the benchmark again.","insertText":"ubench process was killed because it tried to allocate more memory than the VM had available. I had to stop the VM and increase the RAM from 12GB to 14GB. After that, I could run the benchmark again.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":203},"end":{"line":443,"character":258}}}]}]}} + +APP 2025-04-04T20:05:34.923Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:35.046Z --> received didChange | language: markdown | contentVersion: 1607 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:35.271Z --> received didChange | language: markdown | contentVersion: 1608 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:35.371Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:35.402Z --> received didChange | language: markdown | contentVersion: 1609 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:35.486Z --> received didChange | language: markdown | contentVersion: 1610 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:35.760Z --> received didChange | language: markdown | contentVersion: 1611 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:36.601Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":65,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":325} + +APP 2025-04-04T20:05:36.803Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark ran through with an almost identical CPU result, though the memory part was killed after the VM ran out of swap space.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1611} + +APP 2025-04-04T20:05:36.803Z --> calling completion event + +APP 2025-04-04T20:05:36.803Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:36.803Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:36.803Z --> copilot | completion request + +APP 2025-04-04T20:05:36.803Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:05:37.213Z --> received didChange | language: markdown | contentVersion: 1612 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:37.218Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":66,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":326} + +APP 2025-04-04T20:05:37.278Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:05:37.278Z --> completion hints: | as the FreeBSD host. The memory benchmark failed because the VM was running out of memory. 
I had to stop the other VMs to run the benchmark successfully. + +APP 2025-04-04T20:05:37.278Z --> sent request | {"jsonrpc":"2.0","id":325,"result":{"isIncomplete":false,"items":[{"label":"as the FreeBSD host. The memory benchmark failed because the VM was running out of memory. I had to stop the other VMs to run the benchmark successfully.","kind":1,"preselect":true,"detail":"as the FreeBSD host. The memory benchmark failed because the VM was running out of memory. I had to stop the other VMs to run the benchmark successfully.","insertText":"as the FreeBSD host. The memory benchmark failed because the VM was running out of memory. I had to stop the other VMs to run the benchmark successfully.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":218},"end":{"line":443,"character":218}}}]}]}} + +APP 2025-04-04T20:05:37.279Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:37.344Z --> received didChange | language: markdown | contentVersion: 1613 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:37.420Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust the configuration to make Rocky Linux boot (plus some other changes; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM).
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5)
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark ran through with an almost identical CPU result, though the memory part was killed after the VM ran out of swap space.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1613} + +APP 2025-04-04T20:05:37.420Z --> skipping because content is stale + +APP 2025-04-04T20:05:37.420Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:37.420Z --> sent request | {"jsonrpc":"2.0","id":326,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:37.546Z --> received didChange | language: markdown | contentVersion: 1614 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:37.686Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:37.743Z --> received didChange | language: markdown | contentVersion: 1615 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:37.795Z --> received didChange | language: markdown | contentVersion: 1616 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:37.877Z --> failed to parse line: | failed to parse | Content-Length: 18613
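The `cat <<END | tee -a` heredoc pattern used for `/etc/hosts` in the buffer above can be exercised safely against a scratch file first — a minimal sketch, where `hosts.example` is an illustrative file name rather than the real `/etc/hosts`:

```shell
# Append the three Rocky VM host entries via a heredoc, as in the post.
# hosts.example is a scratch file standing in for /etc/hosts.
cat <<END >> hosts.example
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END

# The file now contains one line per VM.
grep -c 'r[0-2]\.lan' hosts.example
```

Note that the heredoc needs the terminating `END` line before the shell stops reading input; without it, the snippet consumes the rest of the script.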
+
+ + +APP 2025-04-04T20:05:37.878Z --> received didChange | language: markdown | contentVersion: 1617 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:37.884Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":71,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":327} + +APP 2025-04-04T20:05:37.957Z --> received didChange | language: markdown | contentVersion: 1618 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.085Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
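As an aside, vm-bhyve's configuration format is plain `key="value"` lines, which makes it easy to script sanity checks over the VM configs. A minimal Go sketch (the `parseVMConf` helper is my own scaffolding, not part of vm-bhyve):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVMConf parses vm-bhyve's simple key="value" configuration
// format into a map. Unquoted values (e.g. cpu=1) are kept as-is;
// blank lines and comments are skipped.
func parseVMConf(conf string) map[string]string {
	vars := make(map[string]string)
	for _, line := range strings.Split(conf, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		vars[key] = strings.Trim(value, `"`)
	}
	return vars
}

func main() {
	conf := `loader="bhyveload"
cpu=1
memory=256M`
	vars := parseVMConf(conf)
	fmt.Println(vars["loader"], vars["cpu"], vars["memory"]) // bhyveload 1 256M
}
```

This is just a convenience for auditing several configs at once (e.g. asserting that all three VMs got the same `cpu` and `memory` values); `vm configure` remains the normal way to edit them.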
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
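Before moving on to ubench, a quick hedged calculation puts the silly-benchmark ns/op numbers above into relative terms (values copied from the `BenchmarkCPUSilly1` outputs; this is just arithmetic, not a new measurement):

```go
package main

import "fmt"

// overheadPercent returns how much slower `guest` is than `host`,
// with both values given in ns/op.
func overheadPercent(host, guest float64) float64 {
	return (guest - host) / host * 100
}

func main() {
	// FreeBSD host (0.4022 ns/op) vs Rocky Linux guest (0.4347 ns/op)
	fmt.Printf("Rocky Linux VM vs host: %.1f%%\n", overheadPercent(0.4022, 0.4347)) // 8.1%
	// FreeBSD host (0.4022 ns/op) vs FreeBSD guest (0.4273 ns/op)
	fmt.Printf("FreeBSD VM vs host: %.1f%%\n", overheadPercent(0.4022, 0.4273)) // 6.2%
}
```

So even the worst case stays in the single-digit percent range, consistent with the "negligible overhead" reading of the raw numbers.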
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark ran through with almost identical results to those on the host (the memory part of the run was killed by the kernel, as the VM ran out of swap).\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve (the CPU scores are nearly identical; the gap is mostly in the memory score), but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1618} + +APP 2025-04-04T20:05:38.085Z --> skipping because content is stale + +APP 2025-04-04T20:05:38.085Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:38.085Z --> sent request | {"jsonrpc":"2.0","id":327,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:38.132Z --> received didChange | language: markdown | contentVersion: 1619 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.213Z --> received didChange | language: markdown | contentVersion: 1620 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.219Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":74,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":328} + +APP 2025-04-04T20:05:38.261Z --> received didChange | language: markdown | contentVersion: 1621 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.420Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1621} + +APP 2025-04-04T20:05:38.420Z --> skipping because content is stale + +APP 2025-04-04T20:05:38.420Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:38.420Z --> sent request | {"jsonrpc":"2.0","id":328,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:38.426Z --> received didChange | language: markdown | contentVersion: 1622 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.529Z --> received didChange | language: markdown | contentVersion: 1623 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.599Z --> received didChange | language: markdown | contentVersion: 1624 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.606Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":78,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":329} + +APP 2025-04-04T20:05:38.726Z --> received didChange | language: markdown | contentVersion: 1625 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:38.807Z --> 
running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the `bhyve-firmware` package, which is required to boot guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM operating system is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We are going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static IPs.
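The addressing plan follows a simple scheme: VM `rN` gets `192.168.1.(120+N)`, alongside host `fN` at `192.168.1.(130+N)`. As a purely illustrative sketch (the same addresses are configured manually below), the `/etc/hosts` entries for the VMs could be generated with a few lines of Go:

```go
package main

import "fmt"

// hostsLines renders the /etc/hosts entries for the three Rocky VMs,
// following the addressing scheme used in this series
// (r0..r2 at 192.168.1.120..122).
func hostsLines() []string {
	lines := make([]string, 0, 3)
	for i := 0; i < 3; i++ {
		lines = append(lines, fmt.Sprintf(
			"192.168.1.%d r%d r%d.lan r%d.lan.buetow.org", 120+i, i, i, i))
	}
	return lines
}

func main() {
	for _, line := range hostsLines() {
		fmt.Println(line)
	}
}
```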
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs.
There, I set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's even a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark ran through with an almost identical CPU result to the one on the FreeBSD host, although the memory part of the benchmark was killed because the VM ran out of swap space.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
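To put "negligible overhead" into numbers, here is a small Go sketch computing the relative deltas from the measurements quoted earlier in this post (the constants are simply those results; this is a back-of-the-envelope illustration, not part of the setup):

```go
package main

import "fmt"

// overheadPct returns the relative difference of the VM result versus the
// host result, in percent. For ns/op numbers, a positive value means the
// VM is slower; for ubench scores (higher is better), a negative value
// means the VM scored lower.
func overheadPct(host, vm float64) float64 {
	return (vm - host) / host * 100
}

func main() {
	// Silly benchmark, ns/op: FreeBSD host vs. Rocky Linux VM.
	fmt.Printf("silly bench delta: %+.1f%%\n", overheadPct(0.4022, 0.4347))
	// ubench multi-CPU score: FreeBSD host vs. FreeBSD VM.
	fmt.Printf("ubench CPU delta:  %+.2f%%\n", overheadPct(2660220, 2652857))
}
```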
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
language: markdown | contentVersion: 1630 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:39.549Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark ran through with almost identical CPU results to those on the FreeBSD host (the memory benchmark, however, ran out of swap space in the VM).\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead from Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1630} + +APP 2025-04-04T20:05:39.549Z --> skipping because content is stale + +APP 2025-04-04T20:05:39.549Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:39.549Z --> sent request | {"jsonrpc":"2.0","id":330,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:39.566Z --> received didChange | language: markdown | contentVersion: 1631 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:39.631Z --> received didChange | language: markdown | contentVersion: 1632 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:39.638Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":86,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":331} + +APP 2025-04-04T20:05:39.672Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:39.786Z --> received didChange | language: markdown | contentVersion: 1633 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl 
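The `publishDiagnostics` and completion records above are JSON-RPC 2.0 payloads; on the wire, an LSP client and server frame each payload with a `Content-Length` header followed by a blank line, per the LSP base protocol. A minimal shell sketch of that framing (the method and file URI are illustrative, not taken from this log):

```shell
# Frame a JSON-RPC notification the way LSP messages are sent on the wire:
# "Content-Length: <bytes>", CRLF CRLF, then the JSON body.
body='{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///tmp/example.gmi.tpl","diagnostics":[]}}'
# ${#body} is the byte length for this ASCII-only body.
printf 'Content-Length: %s\r\n\r\n%s' "${#body}" "$body"
```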
+ +APP 2025-04-04T20:05:39.840Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to modify this configuration to make Rocky Linux boot (plus make some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
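Since these settings are plain `key="value"` lines, per-VM differences (like `uuid` and `network0_mac`) are easy to inspect from the shell. A minimal sketch of pulling individual keys out of such a file (the `get_conf` helper and the temporary sample file are illustrative, not part of vm-bhyve):

```shell
# Create a sample config in vm-bhyve's key="value" style (illustrative values).
conf=$(mktemp)
cat > "$conf" <<'EOF'
loader="uefi"
cpu=4
memory=14G
EOF

# get_conf KEY: print KEY's value, whether or not it is quoted.
get_conf() {
    sed -n "s/^$1=\"\{0,1\}\([^\"]*\)\"\{0,1\}\$/\1/p" "$conf"
}

echo "$(get_conf loader) $(get_conf cpu) $(get_conf memory)"
rm -f "$conf"
```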
","languageId":"markdown","version":1633} + +APP 2025-04-04T20:05:39.840Z --> skipping because content is stale + +APP 2025-04-04T20:05:39.840Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:39.840Z --> sent request | {"jsonrpc":"2.0","id":331,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:39.944Z --> received didChange | language: markdown | contentVersion: 1634 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:39.995Z --> received didChange | language: markdown | contentVersion: 1635 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:40.249Z --> received didChange | language: markdown | contentVersion: 1636 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:40.251Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":90,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":332} + +APP 2025-04-04T20:05:40.443Z --> received didChange | language: markdown | contentVersion: 1637 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:40.448Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":91,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":333} + +APP 2025-04-04T20:05:40.596Z --> received didChange | language: markdown | contentVersion: 1638 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:40.650Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nDespite these out-of-swap messages, the multi-CPU benchmark ran through with almost identical results to those on the FreeBSD host system.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1638} + +APP 2025-04-04T20:05:40.650Z --> skipping because content is stale + +APP 2025-04-04T20:05:40.650Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:40.650Z --> sent request | {"jsonrpc":"2.0","id":333,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:40.725Z --> received didChange | language: markdown | contentVersion: 1639 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:40.727Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:40.794Z --> received didChange | language: markdown | contentVersion: 1640 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:40.985Z --> received didChange | language: markdown | contentVersion: 1641 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:41.021Z --> received didChange | language: markdown | contentVersion: 1642 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:41.090Z --> received didChange | language: markdown | contentVersion: 1643 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:41.328Z --> received didChange | language: markdown | contentVersion: 1644 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:41.342Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":98,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":334} + +APP 2025-04-04T20:05:41.342Z --> sent request | {"jsonrpc":"2.0","id":334,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:44.290Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:45.953Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":24,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":335} + +APP 2025-04-04T20:05:46.154Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark run through with almost identical results like on the FreeBSD host system.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1644} + +APP 2025-04-04T20:05:46.154Z --> calling completion event + +APP 2025-04-04T20:05:46.154Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:46.154Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:46.154Z --> copilot | completion request + +APP 2025-04-04T20:05:46.155Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:05:46.270Z --> received didChange | language: markdown | contentVersion: 1645 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:46.325Z --> received didChange | language: markdown | contentVersion: 1646 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:46.403Z --> received didChange | language: markdown | contentVersion: 1647 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:46.408Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":27,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":336} + +APP 2025-04-04T20:05:46.610Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
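The modification in the next step is done interactively in an editor. Since vm-bhyve config files are plain `key="value"` shell-style assignments, the same adjustments could also be scripted. `patch_vm_conf` below is a hypothetical helper sketch that works on whatever file it is given; on the host, you would point it at `/zroot/bhyve/rocky/rocky.conf` with `doas`:

```sh
# Hypothetical helper: patch a vm-bhyve config file (plain key="value"
# lines) non-interactively instead of editing it via `vm configure`.
patch_vm_conf() {
    conf=$1
    # Rewrite existing keys for a UEFI Linux guest with beefier specs.
    sed -e 's/^loader=.*/loader="uefi"/' \
        -e 's/^cpu=.*/cpu=4/' \
        -e 's/^memory=.*/memory=14G/' "$conf" > "$conf.new" &&
        mv "$conf.new" "$conf"
    # Append keys that are not present in the default config.
    printf 'guest="linux"\nuefi_vars="yes"\ngraphics="yes"\n' >> "$conf"
}
```

This is only a sketch of the idea that the file format lends itself to scripting, e.g. when more VMs are added later; it deliberately doesn't cover every key of the final config shown below.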
So we run `doas vm configure rocky` and modify it to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but with just three VMs it wasn't worth the effort.
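A note on the `truncate` step above: it only changes the image's logical size. The file stays sparse, and blocks are only allocated as the guest actually writes them, so enlarging three images does not immediately cost 3 x 100G of pool space. A quick demonstration in a scratch directory:

```sh
# Grow a file to 100M without allocating blocks: the logical size
# changes, while the on-disk allocation stays (close to) zero.
img=$(mktemp)
truncate -s 100M "$img"
wc -c < "$img"   # logical size: 104857600 bytes
du -k "$img"     # allocated size: (close to) 0 on most file systems
rm -f "$img"
```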
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
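The same three name/address mappings are needed both on the FreeBSD hosts and inside every VM, so a tiny generator function helps avoid copy-paste typos. This is only a sketch: `rocky_hosts` is a hypothetical helper using the names and subnet from this series:

```sh
# Emit the /etc/hosts lines for the r0..r2 VMs (hypothetical helper).
rocky_hosts() {
    for i in 0 1 2; do
        printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' \
            "$i" "$i" "$i" "$i"
    done
}
# On each FreeBSD host: rocky_hosts | doas tee -a /etc/hosts
```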
As per the previous post of this series, the 3 FreeBSD hosts are already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their addresses on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

We then configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
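As a side note, the AVG figure that `ubench` prints appears to be simply the arithmetic mean of its CPU and MEM scores. This is an observation from the outputs below, not something taken from its documentation; for the host's single-CPU run:

```sh
# Reproduce ubench's single-CPU AVG from its CPU and MEM scores
# (integer division truncates the .5).
cpu=671010
mem=1705237
echo $(( (cpu + mem) / 2 ))   # -> 1188123
```

The same relation holds for the other runs shown below.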
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The CPU part of the multi-CPU benchmark ran through with almost identical results to those on the FreeBSD host system, although the memory part ran out of swap space in the VM.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread  2  2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:46.803Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:46.804Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:46.804Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:46.846Z --> received didChange | language: markdown | contentVersion: 1650 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:46.969Z --> received didChange | language: markdown | contentVersion: 1651 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:46.974Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":31,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":337} + +APP 2025-04-04T20:05:47.176Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s 
series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps   06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But to make Rocky Linux boot, we have to adjust this configuration (plus make some other tweaks; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM).
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky  default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900                *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky  default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't need more than that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
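Before moving on, the silly benchmark numbers above can be put into relative terms. A quick back-of-the-envelope calculation (a rough sketch using the `BenchmarkCPUSilly1` ns/op values quoted above, so just as unscientific as the benchmark itself) shows how small the gaps actually are:

```go
package main

import "fmt"

func main() {
	// ns/op values from the runs above.
	host := 0.4022      // FreeBSD host
	rockyVM := 0.4347   // Rocky Linux VM on Bhyve
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve

	// Relative slowdown of each VM versus the bare host.
	fmt.Printf("Rocky VM vs. host:   %.1f%% slower\n", (rockyVM-host)/host*100)
	fmt.Printf("FreeBSD VM vs. host: %.1f%% slower\n", (freebsdVM-host)/host*100)
}
```

So both VMs stay within roughly 6-8% of the bare-metal result in this toy workload.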
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:    671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:    672792 (0.40s)
Ubench Single MEM:    852757 (0.48s)
-----------------------------------
Ubench Single AVG:    762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark's CPU score in the Bhyve VM came out almost identical to the FreeBSD host's. The memory benchmark, however, drove the VM out of swap space, and the `ubench` processes were killed by the kernel.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
received didChange | language: markdown | contentVersion: 1656 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:48.029Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:05:48.030Z --> completion hints: | FreeBSD VM failed because the VM was running out of memory. I had to increase the RAM of the VM to 14GB (as configured in the `rocky.conf` file) and re-run the benchmark. + +APP 2025-04-04T20:05:48.031Z --> sent request | {"jsonrpc":"2.0","id":337,"result":{"isIncomplete":false,"items":[{"label":"FreeBSD VM failed because the VM was running out of memory. I had to increase the RAM of the VM to 14GB (as configured in the `rocky.conf` file) and re-run the benchmark.","kind":1,"preselect":true,"detail":"FreeBSD VM failed because the VM was running out of memory. I had to increase the RAM of the VM to 14GB (as configured in the `rocky.conf` file) and re-run the benchmark.","insertText":"FreeBSD VM failed because the VM was running out of memory. 
I had to increase the RAM of the VM to 14GB (as configured in the `rocky.conf` file) and re-run the benchmark.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":201},"end":{"line":443,"character":275}}}]}]}} + +APP 2025-04-04T20:05:48.031Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:48.307Z --> received didChange | language: markdown | contentVersion: 1657 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:48.314Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":37,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":338} + +APP 2025-04-04T20:05:48.507Z --> received didChange | language: markdown | contentVersion: 1658 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:48.514Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU part of the multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. The memory part, however, ran the VM out of swap space, as the kernel messages above show.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1658} + +APP 2025-04-04T20:05:48.514Z --> skipping because content is stale + +APP 2025-04-04T20:05:48.515Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:48.515Z --> sent request | {"jsonrpc":"2.0","id":338,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:49.164Z --> received didChange | language: markdown | contentVersion: 1659 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:49.233Z --> received didChange | language: markdown | contentVersion: 1660 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:49.239Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":40,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":339} + +APP 2025-04-04T20:05:49.441Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM.
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5)
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe CPU part of the multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. The memory part, however, ran the VM out of swap space, as the kernel messages above show.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1660} + +APP 2025-04-04T20:05:49.441Z --> calling completion event + +APP 2025-04-04T20:05:49.441Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:05:49.442Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:05:49.442Z --> copilot | completion request + +APP 2025-04-04T20:05:49.442Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:05:50.077Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:05:50.078Z --> completion hints: | failed because the VM was running out of memory. I had to increase the RAM from 12GB to 14GB to get it to run. After that, it worked fine: + +APP 2025-04-04T20:05:50.078Z --> sent request | {"jsonrpc":"2.0","id":339,"result":{"isIncomplete":false,"items":[{"label":"failed because the VM was running out of memory. I had to increase the RAM from 12GB to 14GB to get it to run. After that, it worked fine:","kind":1,"preselect":true,"detail":"failed because the VM was running out of memory. I had to increase the RAM from 12GB to 14GB to get it to run. After that, it worked fine:","insertText":"failed because the VM was running out of memory. I had to increase the RAM from 12GB to 14GB to get it to run. 
After that, it worked fine:","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":178},"end":{"line":443,"character":252}}}]}]}} + +APP 2025-04-04T20:05:50.078Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:52.546Z --> received didChange | language: markdown | contentVersion: 1661 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:52.552Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":115,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":340} + +APP 2025-04-04T20:05:52.713Z --> received didChange | language: markdown | contentVersion: 1662 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:52.753Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM.
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
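For reference, had the automation been worth it, Anaconda (the Rocky/RHEL installer) supports unattended installs via a kickstart file. A minimal, untested sketch follows; every value is a placeholder of mine (the real network values would mirror the static IP configuration shown later), not what was actually used:

```
# Minimal kickstart sketch; all values here are placeholders.
text
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
autopart
network --bootproto=static --ip=192.168.1.120 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.1
reboot
```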
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical CPU results to those on the FreeBSD host system. However, the memory benchmark was killed, as the VM ran out of memory and swap space.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1662} + +APP 2025-04-04T20:05:52.753Z --> skipping because content is stale + +APP 2025-04-04T20:05:52.753Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:52.753Z --> sent request | {"jsonrpc":"2.0","id":340,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:52.870Z --> received didChange | language: markdown | contentVersion: 1663 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:52.915Z --> received didChange | language: markdown | contentVersion: 1664 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.095Z --> received didChange | language: markdown | contentVersion: 1665 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.235Z --> received didChange | language: markdown | contentVersion: 1666 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.407Z --> received didChange | language: markdown | contentVersion: 1667 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.463Z --> received didChange | language: markdown | contentVersion: 1668 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.561Z --> received didChange | language: markdown | contentVersion: 1669 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.625Z --> received didChange | language: markdown | contentVersion: 1670 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.631Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":124,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":341} + +APP 2025-04-04T20:05:53.703Z --> received didChange | language: markdown | contentVersion: 1671 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.825Z --> received didChange | language: markdown | contentVersion: 1672 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.832Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust the configuration to make Rocky Linux boot. We also give the VMs beefy specs (4 CPU cores and 14GB of RAM), as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical CPU results to those on the FreeBSD host system. However, the memory benchmark was killed, as the VM ran out of memory and swap space.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1672} + +APP 2025-04-04T20:05:53.832Z --> skipping because content is stale + +APP 2025-04-04T20:05:53.832Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:53.832Z --> sent request | {"jsonrpc":"2.0","id":341,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:53.871Z --> received didChange | language: markdown | contentVersion: 1673 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.974Z --> received didChange | language: markdown | contentVersion: 1674 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:53.980Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":128,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":342} + +APP 2025-04-04T20:05:54.182Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary word. It matters here for two reasons: guest operating systems use the instruction to perform various tasks more efficiently, and Bhyve can pass the capability through to virtual machines when the host CPU supports it. The flag is also a handy indicator, as checking for POPCNT in the boot messages is the easiest way to tell whether a processor is recent enough to support Bhyve at all.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the `bhyve-firmware` package, which is required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few changes, and we take the opportunity to make some other adjustments as well: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8  tcp4   *:5900    *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
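As an aside on that automation point: Rocky's Anaconda installer can be driven unattended with a kickstart file. I didn't go down that route, but a minimal, untested sketch might look like this (the values here are placeholders, not the settings I actually used):

```
# ks.cfg - hypothetical minimal kickstart for an unattended Rocky 9 install
text
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
```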
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples are all executed on `f0` (or inside the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that much memory anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
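A quick aside before the `ubench` runs: the `BenchmarkCPUSilly1` figures above make it easy to put a number on "slightly slower." Relative to the host's 0.4022 ns/op, the VMs lose roughly 6-8% (a throwaway calculation, nothing more):

```go
package main

import "fmt"

func main() {
	// ns/op figures from the sillybench runs above.
	host := 0.4022      // FreeBSD host
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve
	linuxVM := 0.4347   // Rocky Linux VM on Bhyve

	slowdown := func(vm float64) float64 { return (vm - host) / host * 100 }
	fmt.Printf("FreeBSD VM slowdown: %.1f%%\n", slowdown(freebsdVM)) // ~6.2%
	fmt.Printf("Linux VM slowdown:   %.1f%%\n", slowdown(linuxVM))   // ~8.1%
}
```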
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with an almost identical CPU score to the FreeBSD host system. However, the memory part of the multi-CPU run never finished: the VM ran out of memory and swap, and the kernel killed the `ubench` processes. The single-CPU run also reported the VM's memory score at roughly half of the host's (852757 vs. 1705237).

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory b\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1682} + +APP 2025-04-04T20:05:55.107Z --> skipping because content is stale + +APP 2025-04-04T20:05:55.107Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:55.107Z --> sent request | {"jsonrpc":"2.0","id":343,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:55.140Z --> received didChange | language: markdown | contentVersion: 1683 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:55.318Z --> received didChange | language: markdown | contentVersion: 1684 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:55.557Z --> received didChange | language: markdown | contentVersion: 1685 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:55.627Z --> received didChange | language: markdown | contentVersion: 1686 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:55.869Z --> received didChange | language: markdown | contentVersion: 1687 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:55.895Z --> received didChange | language: markdown | contentVersion: 1688 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:56.072Z --> received didChange | language: markdown | contentVersion: 1689 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:56.110Z --> received didChange | language: markdown | contentVersion: 1690 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:56.155Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:56.181Z --> received didChange | language: markdown | contentVersion: 1691 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:56.186Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":145,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":344} + +APP 2025-04-04T20:05:56.296Z --> received didChange | language: markdown | contentVersion: 1692 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:56.388Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its main strength is minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for POPCNT matters because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, we need a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14G of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
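Should the stop/enlarge/reinstall dance ever need to be repeated on all three hosts at once, it could be scripted. A minimal dry-run sketch (the `f0`-`f2` loop, the `ssh` invocations, and passwordless `doas` are assumptions about my setup; it only prints the commands instead of running them):

```sh
#!/bin/sh
# Dry-run sketch: print the per-host commands to enlarge and reinstall
# the VM disk image. Nothing is executed; pipe the output to sh to run it
# (which additionally assumes doas works non-interactively over ssh).
enlarge_cmds() {
    for h in f0 f1 f2; do
        printf 'ssh %s "doas vm stop rocky"\n' "$h"
        printf 'ssh %s "doas truncate -s 100G /bhyve/rocky/disk0.img"\n' "$h"
        printf 'ssh %s "doas vm install rocky Rocky-9.5-x86_64-minimal.iso"\n' "$h"
    done
}

enlarge_cmds
```

For three hosts this is hardly worth it, which is why I did it by hand.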
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100G drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and inside the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
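As a side note on the silly numbers: the relative slowdown of the VMs versus the FreeBSD host can be computed from the `BenchmarkCPUSilly1` ns/op values. A throwaway Go sketch (the values are copied from the runs above; `overheadPct` is just a helper for this post, not part of sillybench):

```go
package main

import "fmt"

// overheadPct returns how much slower vm is than host, in percent.
func overheadPct(host, vm float64) float64 {
	return (vm/host - 1) * 100
}

func main() {
	host := 0.4022 // FreeBSD host, BenchmarkCPUSilly1, ns/op
	fmt.Printf("Rocky Linux VM: %.1f%%\n", overheadPct(host, 0.4347))
	fmt.Printf("FreeBSD VM:     %.1f%%\n", overheadPct(host, 0.4273))
}
```

That's roughly 8% and 6% respectively, well within the grain-of-salt margin of such a micro-benchmark.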
For `ubench`, we limit the first run to one CPU with `-s`, and we let the second run at full speed.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark ran through in the Bhyve VM with almost identical results to the FreeBSD host system. However, the memory benchmark failed: the VM ran out of memory and swap space, and the kernel killed the `ubench` processes.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems use it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.
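If you want to check several relevant flags at once (e.g. VMX for hardware virtualization support in addition to POPCNT), the grep above can be generalized into a tiny helper. A minimal sketch — note that `FEATURES_LINE` here is a pasted sample, not live `dmesg` output:

```sh
#!/bin/sh
# Sketch: check a dmesg CPU-features line for flags relevant to Bhyve.
# On a real host, feed in the output of `dmesg | grep Features2` instead
# of this pasted sample line.
FEATURES_LINE='Features2=0x7ffafbbf<SSE3,PCLMULQDQ,VMX,SSE4.1,SSE4.2,POPCNT,AESNI,AVX>'

has_feature() {
    # Normalize the <...> delimiters to commas so every flag is comma-separated.
    normalized=",$(printf '%s' "$FEATURES_LINE" | tr '<>' ',,'),"
    case "$normalized" in
        *",$1,"*) return 0 ;;
        *)        return 1 ;;
    esac
}

for f in VMX POPCNT AVX512F; do
    if has_feature "$f"; then echo "$f: supported"; else echo "$f: missing"; fi
done
```

Adjust the flag list to whatever your guests care about; the matching is exact, so `AVX` will not falsely match `AVX512F`.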
## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, and to give the VMs beefy specs (4 CPU cores and 14 GB of RAM, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs), we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
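That said, one piece that would be trivial to script is generating the three near-identical VM configurations, which differ only in `uuid` and `network0_mac`. A minimal sketch of such a (hypothetical) helper — the UUID and MAC passed in below are placeholders, not the real values:

```sh
#!/bin/sh
# Sketch: render a vm-bhyve guest config from per-VM parameters.
# The uuid and mac arguments are placeholders here; vm-bhyve normally
# generates them itself when the guest is created.
render_conf() {
    uuid="$1" mac="$2"
    cat <<EOF
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="$uuid"
network0_mac="$mac"
EOF
}

# Example: render a config with placeholder identifiers.
render_conf 00000000-0000-0000-0000-000000000000 58:9c:fc:00:00:00
```

Since `vm create` already generates the `uuid` and `network0_mac` (as seen in the default `rocky.conf` above), a template like this would only need to preserve those two lines per VM.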
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM when a server boots, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Whereas:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use more than that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
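As an aside, when comparing raw scores between host and VM, a one-liner to turn two scores into a relative overhead figure comes in handy. A small sketch — the two values plugged in are the multi-CPU CPU scores from the host and the FreeBSD VM reported further down in this section:

```sh
#!/bin/sh
# Sketch: relative difference between two ubench CPU scores.
# 2660220 is the host's multi-CPU score, 2652857 the FreeBSD VM's.
host_score=2660220
vm_score=2652857
awk -v h="$host_score" -v v="$vm_score" \
    'BEGIN { printf "Bhyve CPU overhead: %.2f%%\n", (h - v) * 100 / h }'
```

For these two scores, that works out to well under one percent of CPU overhead.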
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed: the VM ran out of swap space, and the kernel killed the `ubench` processes to reclaim memory.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread  2  2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with o\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1706} + +APP 2025-04-04T20:05:59.070Z --> skipping because content is stale + +APP 2025-04-04T20:05:59.070Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:59.070Z --> sent request | {"jsonrpc":"2.0","id":346,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:59.135Z --> received didChange | language: markdown | contentVersion: 1707 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:59.174Z --> received didChange | language: markdown | contentVersion: 1708 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:59.299Z --> received didChange | language: markdown | contentVersion: 1709 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:59.304Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":161,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":347} + +APP 2025-04-04T20:05:59.442Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:59.467Z --> received didChange | language: markdown | contentVersion: 1710 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:59.498Z --> received didChange | language: markdown | contentVersion: 1711 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:59.506Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. It matters for CPU virtualization with Bhyve because guest operating systems use the instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead.
We also install the `bhyve-firmware` package, which is required for booting guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso       1808 MB 4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, a few adjustments are needed. And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8  tcp4   *:5900                *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed for `r0`, the VM running on host `f0`:

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
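Before moving on, let's put the silly-benchmark figures side by side. A small Go sketch computing the relative slowdown from the `BenchmarkCPUSilly1` ns/op numbers quoted above:

```go
package main

import "fmt"

// overheadPct returns how much slower vm is compared to host, in percent.
func overheadPct(host, vm float64) float64 {
	return (vm - host) / host * 100
}

func main() {
	host := 0.4022 // FreeBSD host, BenchmarkCPUSilly1, ns/op
	fmt.Printf("Rocky Linux VM: %.1f%%\n", overheadPct(host, 0.4347)) // → 8.1%
	fmt.Printf("FreeBSD VM:     %.1f%%\n", overheadPct(host, 0.4273)) // → 6.2%
}
```

So both VMs stay within single-digit percentages of the bare host in this (silly) test.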
For the first run, we limit `ubench` to a single CPU with `-s`, and in the second run, we let it run at full speed.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:    671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-memory errors, as the kernel messages above show.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1711} + +APP 2025-04-04T20:05:59.506Z --> skipping because content is stale + +APP 2025-04-04T20:05:59.506Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:05:59.506Z --> sent request | {"jsonrpc":"2.0","id":347,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:05:59.631Z --> received didChange | language: markdown | contentVersion: 1712 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:59.636Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":164,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":348} + +APP 2025-04-04T20:05:59.751Z --> received didChange | language: markdown | contentVersion: 1713 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:05:59.838Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this configuration needs some adjustments. We also give the VMs beefy specs (4 CPU cores and 14 GB of RAM), as we intend to run the majority of the workload in the k3s cluster on these Linux VMs.
We run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
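Since each host runs a single guest, every VM console ends up on the same port; a trivial loop (assuming hosts `f0` to `f2` as above, and vm-bhyve's default of binding the first console to port 5900) prints the URLs to connect to:

```shell
# Print the VNC console URL for each FreeBSD host.
# vm-bhyve binds the first guest's VNC console to port 5900 on every host.
for i in 0 1 2; do
    echo "vnc://f$i:5900"
done
```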
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes vm-bhyve wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and set `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
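Before switching tools, the gaps seen in the silly benchmark can be quantified; a quick calculation with the `BenchmarkCPUSilly1` ns/op values from the runs above:

```shell
# Relative slowdown of the two Bhyve guests vs. the bare-metal FreeBSD host
# in BenchmarkCPUSilly1 (ns/op values copied from the runs above).
host=0.4022        # FreeBSD host
linux_vm=0.4347    # Rocky Linux VM on Bhyve
freebsd_vm=0.4273  # FreeBSD VM on Bhyve

awk -v h="$host" -v l="$linux_vm" -v f="$freebsd_vm" 'BEGIN {
    printf "Linux VM:   +%.1f%%\n", (l - h) / h * 100
    printf "FreeBSD VM: +%.1f%%\n", (f - h) / h * 100
}'
```

This puts the Linux VM roughly 8% and the FreeBSD VM roughly 6% behind the bare-metal host in this particular micro-benchmark.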
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is some overhead from Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution running on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and tight integration with the FreeBSD operating system for storage and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for virtualization because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of boilerplate. We also install the package required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool (mounted at `/zroot/bhyve`):

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM operating system is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs some adjustments. And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB RAM.
So we ran `doas vm configure rocky` and modified the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and as we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
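Since each host runs exactly one VM on vm-bhyve's default VNC port, the console addresses follow a simple pattern; a tiny loop (host names `f0`..`f2` as used throughout this series) prints them:

```shell
#!/bin/sh
# Print the VNC console address of each f3s host's Bhyve VM.
# Assumes one VM per host, listening on vm-bhyve's default port 5900.
for host in f0 f1 f2; do
    echo "vnc://${host}:5900"
done
```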
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
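Rather than typing the host entries used below by hand, they can be generated; a small sketch assuming the addressing plan of this series (VM `rN` gets `192.168.1.12N`):

```shell
#!/bin/sh
# Emit one /etc/hosts line per Rocky VM (r0..r2 -> 192.168.1.120-122),
# ready to be appended to /etc/hosts on the FreeBSD hosts and the VMs.
for i in 0 1 2; do
    echo "192.168.1.12${i} r${i} r${i}.lan r${i}.lan.buetow.org"
done
```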
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Whereas:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
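As a side note, the silly-benchmark numbers above can be condensed into a single overhead figure; a quick `awk` sketch (ns/op values copied from the `BenchmarkCPUSilly1` runs on the FreeBSD host and in the Rocky Linux VM):

```shell
#!/bin/sh
# Relative Bhyve guest slowdown in the silly CPU benchmark,
# computed from the ns/op values measured above.
awk 'BEGIN {
    host  = 0.4022  # FreeBSD host,   BenchmarkCPUSilly1 (ns/op)
    guest = 0.4347  # Rocky Linux VM, BenchmarkCPUSilly1 (ns/op)
    printf "overhead: %.1f%%\n", (guest - host) / host * 100
}'
# -> overhead: 8.1%
```

So the Rocky Linux guest is roughly 8% slower in this (admittedly silly) micro-benchmark.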
We limit `ubench` to one CPU for the first run with `-s 1` and let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed because the VM ran out of swap space.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE  C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread 2  2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space er\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1725} + +APP 2025-04-04T20:06:01.811Z --> skipping because content is stale + +APP 2025-04-04T20:06:01.811Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:01.811Z --> sent request | {"jsonrpc":"2.0","id":351,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:01.907Z --> received didChange | language: markdown | contentVersion: 1726 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:01.985Z --> received didChange | language: markdown | contentVersion: 1727 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:02.095Z --> received didChange | language: markdown | contentVersion: 1728 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:02.352Z --> received didChange | language: markdown | contentVersion: 1729 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:02.354Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":181,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":352} + +APP 2025-04-04T20:06:02.480Z --> received didChange | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a modern, lightweight hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance.
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, we need a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
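If it ever does become worth automating, the host-side steps at least are easy to loop over. Here is a dry-run sketch (an assumption on my part, not part of the actual setup): it only prints the per-host disk-resize commands for `f0` to `f2`, using the `/bhyve` symlink created earlier; you would pipe each line to a shell to actually run it.

```sh
# Print the disk-resize commands for each FreeBSD host (dry run only).
# Assumes one "rocky" VM per host and the /bhyve symlink from above.
for host in f0 f1 f2; do
  echo "ssh $host doas vm stop rocky"
  echo "ssh $host doas truncate -s 100G /bhyve/rocky/disk0.img"
done
```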
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM when the server boots, we add the following to `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
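The `/etc/hosts` additions in this post use the same `cat <<END | doas tee -a` heredoc pattern as the `rc.conf` snippet above. If you haven't used that pattern before, it can be tried safely against a scratch file first; a minimal local sketch (no `doas` needed, the file is a throwaway `mktemp` file):

```sh
# Append heredoc lines to a scratch file instead of /etc/hosts,
# then verify that all three entries landed.
tmp=$(mktemp)
cat <<END | tee -a "$tmp" >/dev/null
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
grep -c 'r[0-2].lan' "$tmp"   # counts the appended entries
rm "$tmp"
```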
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool, mounted at `/zroot/bhyve`:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso      1808 MB 4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This doesn't boot Rocky Linux yet, though, and needs some adjustments (also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB RAM).
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900      *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, the automation doesn't seem worth it, as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```
### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort. The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples are all executed on `f0` (and the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to verify that everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```
And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!
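To put a rough number on that grain of salt: relative to the host's `BenchmarkCPUSilly1` figure, both VM results are within a few percent. A quick calculation from the ns/op values above (illustrative only):

```go
package main

import "fmt"

func main() {
	// ns/op figures for BenchmarkCPUSilly1 from the runs above.
	host := 0.4022      // FreeBSD host
	linuxVM := 0.4347   // Rocky Linux VM on Bhyve
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve

	overhead := func(vm float64) float64 { return (vm - host) / host * 100 }
	fmt.Printf("Linux VM:   +%.1f%%\n", overhead(linuxVM))   // +8.1%
	fmt.Printf("FreeBSD VM: +%.1f%%\n", overhead(freebsdVM)) // +6.2%
}
```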
## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:    671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:    672792 (0.40s)
Ubench Single MEM:    852757 (0.48s)
-----------------------------------
Ubench Single AVG:    762774
```
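As a sanity check on these scores: the host's all-CPU score is almost exactly four times its single-CPU score, matching the N100's 4 cores. A quick calculation from the figures above (illustrative only):

```go
package main

import "fmt"

func main() {
	// Host Ubench CPU scores from the runs above.
	single := 671010.0
	all := 2660220.0
	// Close to 4.0, i.e. near-linear scaling across the 4 cores.
	fmt.Printf("scaling factor: %.2f\n", all/single)
}
```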
All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. The memory benchmark, however, failed with out-of-swap-space errors.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1734} + +APP 2025-04-04T20:06:03.191Z --> skipping because content is stale + +APP 2025-04-04T20:06:03.192Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:03.192Z --> sent request | {"jsonrpc":"2.0","id":354,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:03.216Z --> received didChange | language: markdown | contentVersion: 1735 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:03.278Z --> received didChange | language: markdown | contentVersion: 1736 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:03.284Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":188,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":355} + +APP 2025-04-04T20:06:03.419Z --> received didChange | language: markdown | contentVersion: 1737 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:03.485Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
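The same check can be wrapped into a small script. This is only a sketch: the `has_feature` helper is a hypothetical name, and the `Features2` sample line is a shortened copy of the host output above (on a live system you would feed it `dmesg | grep Features2` instead).

```shell
#!/bin/sh
# Sketch: check a dmesg "Features2" line for CPU flags relevant to Bhyve.
# Note: simple substring matching, good enough for distinct flag names.
has_feature() {
    # $1 = flag name, $2 = Features2 line
    case "$2" in
        *"$1"*) return 0 ;;
        *) return 1 ;;
    esac
}

# Shortened sample taken from the dmesg output above:
features2='Features2=0x7ffafbbf<SSE3,PCLMULQDQ,VMX,SSSE3,SSE4.1,SSE4.2,POPCNT,AESNI,AVX>'

for flag in POPCNT VMX; do
    if has_feature "$flag" "$features2"; then
        echo "$flag: supported"
    else
        echo "$flag: missing"
    fi
done
```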
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve    1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs exist yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, a few adjustments are needed. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and since we do this only once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
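If the repeated steps ever did become worth automating, a minimal dry-run sketch could generate the per-host install commands. This is only an illustration under assumptions from above (hostnames `f0`-`f2`, the ISO filename, passwordless `doas`); it prints the commands rather than executing them, so you can inspect before piping to `sh`:

```shell
#!/bin/sh
# Sketch: print the install command for each host instead of executing it.
# Real automation would run these via ssh; here we only generate them.
iso="Rocky-9.5-x86_64-minimal.iso"
for host in f0 f1 f2; do
    echo "ssh $host doas vm install rocky $iso"
done
```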
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100G drive) and set a root password. After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their addresses to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no`, so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// Benchmarks live in a _test.go file so that "go test -bench" picks them up.
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM, though the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
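As an aside, a similar single- vs. all-CPU comparison could have been done with the silly Go benchmark itself: `go test` has a standard `-cpu` flag that sets GOMAXPROCS per benchmark run. A sketch, assuming it is run from the sillybench checkout used above:

```shell
# Sketch: run the Go benchmark once with 1 CPU and once with 4,
# analogous to ubench's single-CPU and all-CPU modes below.
go test -bench=. -cpu=1,4
```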
Here, we limit ubench to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples shown are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why; it appears the memory benchmark exhausted the VM's memory.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs adjustments to make Rocky Linux boot. Additionally, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1754} + +APP 2025-04-04T20:06:05.643Z --> skipping because content is stale + +APP 2025-04-04T20:06:05.643Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:05.644Z --> sent request | {"jsonrpc":"2.0","id":358,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:05.682Z --> received didChange | language: markdown | contentVersion: 1755 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:05.853Z --> received didChange | language: markdown | contentVersion: 1756 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:06.103Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":208,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":359} + +APP 2025-04-04T20:06:06.305Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs adjustments to make Rocky Linux boot. Additionally, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
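As a back-of-the-envelope check (my own helper sketch, not part of sillybench or the original post), the ns/op figures from the silly benchmark runs above can be turned into a relative overhead percentage of a VM result versus the host result:

```go
package main

import "fmt"

// overheadPct returns how much slower (in percent) a VM result is compared
// to a host result, given ns/op figures as reported by `go test -bench`.
func overheadPct(hostNsPerOp, vmNsPerOp float64) float64 {
	return (vmNsPerOp - hostNsPerOp) / hostNsPerOp * 100
}

func main() {
	// BenchmarkCPUSilly1 figures quoted in this post:
	// FreeBSD host 0.4022 ns/op, Rocky Linux VM 0.4347 ns/op,
	// FreeBSD VM 0.4273 ns/op.
	fmt.Printf("Rocky Linux VM overhead: %.1f%%\n", overheadPct(0.4022, 0.4347))
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPct(0.4022, 0.4273))
}
```

So both VMs land within roughly 6-8% of the bare-metal result in this (unscientific) micro-benchmark.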
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1756} + +APP 2025-04-04T20:06:06.305Z --> calling completion event + +APP 2025-04-04T20:06:06.305Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:06:06.306Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:06:06.306Z --> copilot | completion request + +APP 2025-04-04T20:06:06.306Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:06:07.487Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:06:07.488Z --> completion hints: | this happened. I had 12GB of RAM assigned to the VM, and the host system had 16GB of RAM. The FreeBSD host was running with 4GB of RAM free, and the VM was using 8GB of RAM. So it should have been enough memory available. + +APP 2025-04-04T20:06:07.488Z --> sent request | {"jsonrpc":"2.0","id":359,"result":{"isIncomplete":false,"items":[{"label":"this happened. I had 12GB of RAM assigned to the VM, and the host system had 16GB of RAM. The FreeBSD host was running with 4GB of RAM free, and the VM was using 8GB of RAM. So it should have been enough memory available.","kind":1,"preselect":true,"detail":"this happened. I had 12GB of RAM assigned to the VM, and the host system had 16GB of RAM. 
The FreeBSD host was running with 4GB of RAM free, and the VM was using 8GB of RAM. So it should have been enough memory available.","insertText":"this happened. I had 12GB of RAM assigned to the VM, and the host system had 16GB of RAM. The FreeBSD host was running with 4GB of RAM free, and the VM was using 8GB of RAM. So it should have been enough memory available.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":429},"end":{"line":443,"character":429}}}]}]}} + +APP 2025-04-04T20:06:07.489Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:12.774Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:13.259Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":208,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":360} + +APP 2025-04-04T20:06:13.393Z --> received didChange | language: markdown | contentVersion: 1757 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:13.455Z --> received didChange | language: markdown | contentVersion: 1758 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:13.461Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":210,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":361} + +APP 2025-04-04T20:06:13.662Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, we need some adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give the VMs beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated way, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Installing the latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1758} + +APP 2025-04-04T20:06:13.663Z --> calling completion event + +APP 2025-04-04T20:06:13.663Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:06:13.663Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:06:13.663Z --> copilot | completion request + +APP 2025-04-04T20:06:13.663Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:06:13.697Z --> received didChange | language: markdown | contentVersion: 1759 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:13.770Z --> received didChange | language: markdown | contentVersion: 1760 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:13.832Z --> received didChange | language: markdown | contentVersion: 1761 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:13.973Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} 
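One detail worth noting about the `ubench` outputs above (my own observation from the numbers in this post, not documented `ubench` behavior): the `AVG` line appears to be simply the truncated arithmetic mean of the CPU and MEM scores, which a tiny sketch can reproduce:

```go
package main

import "fmt"

// ubenchAvg reproduces the AVG line of ubench output as the truncated
// (integer) arithmetic mean of the CPU and MEM scores. Assumption based on
// the outputs quoted in this post, not on ubench documentation.
func ubenchAvg(cpu, mem int) int {
	return (cpu + mem) / 2
}

func main() {
	// FreeBSD host, single-CPU run: CPU 671010, MEM 1705237 -> AVG 1188123
	fmt.Println(ubenchAvg(671010, 1705237))
	// FreeBSD VM, single-CPU run: CPU 672792, MEM 852757 -> AVG 762774
	fmt.Println(ubenchAvg(672792, 852757))
}
```

Both computed values match the `Ubench Single AVG` figures reported by the tool.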
+ +APP 2025-04-04T20:06:13.975Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:06:13.975Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:06:13.986Z --> received didChange | language: markdown | contentVersion: 1762 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:13.993Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":214,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":362} + +APP 2025-04-04T20:06:14.184Z --> received didChange | language: markdown | contentVersion: 1763 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:14.184Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:14.194Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to change this configuration to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IP addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1763} + +APP 2025-04-04T20:06:14.194Z --> skipping because content is stale + +APP 2025-04-04T20:06:14.194Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:14.195Z --> sent request | {"jsonrpc":"2.0","id":362,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:14.257Z --> received didChange | language: markdown | contentVersion: 1764 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:14.263Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":216,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":363} + +APP 2025-04-04T20:06:14.438Z --> received didChange | language: markdown | contentVersion: 1765 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:14.463Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I assume the benchmark simply tried to allocate more memory than the VM has available.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1765} + +APP 2025-04-04T20:06:14.463Z --> skipping because content is stale + +APP 2025-04-04T20:06:14.464Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:14.464Z --> sent request | {"jsonrpc":"2.0","id":363,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:14.520Z --> received didChange | language: markdown | contentVersion: 1766 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:14.591Z --> received didChange | language: markdown | contentVersion: 1767 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:14.597Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":219,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":364} + +APP 2025-04-04T20:06:14.737Z --> received didChange | language: markdown | contentVersion: 1768 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:14.799Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot. Additionally, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I assume the benchmark simply tried to allocate more memory than the VM has available.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1768} + +APP 2025-04-04T20:06:14.799Z --> skipping because content is stale + +APP 2025-04-04T20:06:14.799Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:14.799Z --> sent request | {"jsonrpc":"2.0","id":364,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:14.888Z --> received didChange | language: markdown | contentVersion: 1769 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:14.956Z --> received didChange | language: markdown | contentVersion: 1770 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:15.045Z --> received didChange | language: markdown | contentVersion: 1771 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:15.050Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":223,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":365} + +APP 2025-04-04T20:06:15.251Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot. Additionally, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't make use of all of that anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's even a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
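In all of these comparisons, what matters is the relative difference between host and VM. As a small helper (my own sketch, not part of `sillybench` or `ubench`), the quoted `ns/op` values can be turned into a slowdown percentage:

```go
package main

import "fmt"

// slowdownPercent returns how much slower b is than a, in percent.
// For ns/op values, lower is better, so a is the baseline (host) timing.
func slowdownPercent(a, b float64) float64 {
	return (b - a) / a * 100
}

func main() {
	// BenchmarkCPUSilly1 ns/op values quoted in this post:
	host := 0.4022 // FreeBSD host
	vm := 0.4347   // Rocky Linux VM on Bhyve
	fmt.Printf("VM is %.1f%% slower than the host\n", slowdownPercent(host, vm))
}
```

For the numbers above, this reports a slowdown of roughly 8%, which matches the "slight Bhyve overhead" impression.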
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I did not investigate further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1771} + +APP 2025-04-04T20:06:15.251Z --> calling completion event + +APP 2025-04-04T20:06:15.251Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:06:15.251Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:06:15.251Z --> copilot | completion request + +APP 2025-04-04T20:06:15.252Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:06:15.450Z --> received didChange | language: markdown | contentVersion: 1772 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:15.530Z --> received didChange | language: markdown | contentVersion: 1773 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:15.678Z --> received didChange | language: markdown | contentVersion: 1774 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:15.874Z --> failed to parse line: | failed to parse | Content-Length: 18769
+
+ + +APP 2025-04-04T20:06:15.875Z --> received didChange | language: markdown | contentVersion: 1775 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.016Z --> received didChange | language: markdown | contentVersion: 1776 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.071Z --> received didChange | language: markdown | contentVersion: 1777 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.086Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:06:16.087Z --> completion hints: | using swap space on the FreeBSD host, and I also didn't configure any swap space on the Bhyve VM. So this is probably the reason for the failure. + +APP 2025-04-04T20:06:16.087Z --> sent request | {"jsonrpc":"2.0","id":365,"result":{"isIncomplete":false,"items":[{"label":"using swap space on the FreeBSD host, and I also didn't configure any swap space on the Bhyve VM. So this is probably the reason for the failure.","kind":1,"preselect":true,"detail":"using swap space on the FreeBSD host, and I also didn't configure any swap space on the Bhyve VM. So this is probably the reason for the failure.","insertText":"using swap space on the FreeBSD host, and I also didn't configure any swap space on the Bhyve VM. 
So this is probably the reason for the failure.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":368},"end":{"line":443,"character":368}}}]}]}} + +APP 2025-04-04T20:06:16.087Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:16.160Z --> received didChange | language: markdown | contentVersion: 1778 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.232Z --> received didChange | language: markdown | contentVersion: 1779 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.239Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":231,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":366} + +APP 2025-04-04T20:06:16.306Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:16.346Z --> received didChange | language: markdown | contentVersion: 1780 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.400Z --> received didChange | language: markdown | contentVersion: 1781 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.439Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines, and its tight integration with the FreeBSD operating system's performance and network management capabilities.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Support for the POPCNT instruction is important for virtualization because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis needs some changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` setting isn't strictly required: it waits 5 seconds before starting each VM, but there is currently only one VM per host. It may become useful later when there are more VMs. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nNext, we switch the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nThen we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs (shown for `r0`):\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't make use of all of that anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's even a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
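In all of these comparisons, what matters is the relative difference between host and VM. As a small helper (my own sketch, not part of `sillybench` or `ubench`), the quoted `ns/op` values can be turned into a slowdown percentage:

```go
package main

import "fmt"

// slowdownPercent returns how much slower b is than a, in percent.
// For ns/op values, lower is better, so a is the baseline (host) timing.
func slowdownPercent(a, b float64) float64 {
	return (b - a) / a * 100
}

func main() {
	// BenchmarkCPUSilly1 ns/op values quoted in this post:
	host := 0.4022 // FreeBSD host
	vm := 0.4347   // Rocky Linux VM on Bhyve
	fmt.Printf("VM is %.1f%% slower than the host\n", slowdownPercent(host, vm))
}
```

For the numbers above, this reports a slowdown of roughly 8%, which matches the "slight Bhyve overhead" impression.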
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I did not investigate further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1781} + +APP 2025-04-04T20:06:16.439Z --> skipping because content is stale + +APP 2025-04-04T20:06:16.439Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:16.440Z --> sent request | {"jsonrpc":"2.0","id":366,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:16.493Z --> received didChange | language: markdown | contentVersion: 1782 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.583Z --> received didChange | language: markdown | contentVersion: 1783 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:16.853Z --> received didChange | language: markdown | contentVersion: 1784 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.054Z --> received didChange | language: markdown | contentVersion: 1785 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.131Z --> received didChange | language: markdown | contentVersion: 1786 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.208Z --> received didChange | language: markdown | contentVersion: 1787 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.367Z --> received didChange | language: markdown | contentVersion: 1788 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.544Z --> received didChange | language: markdown | contentVersion: 1789 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.661Z --> received didChange | language: markdown | contentVersion: 1790 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.701Z --> received didChange | language: markdown | contentVersion: 1791 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.775Z --> received didChange | language: markdown | contentVersion: 1792 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.911Z --> received didChange | language: markdown | contentVersion: 1793 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:17.917Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":245,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":367} + +APP 2025-04-04T20:06:18.000Z --> received didChange | language: markdown | contentVersion: 1794 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:18.047Z 
--> received didChange | language: markdown | contentVersion: 1795 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:18.118Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nFor the guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, we need to adjust this configuration. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (instead of DHCP). 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not investigating this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1795} + +APP 2025-04-04T20:06:18.118Z --> skipping because content is stale + +APP 2025-04-04T20:06:18.118Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:18.118Z --> sent request | {"jsonrpc":"2.0","id":367,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:06:18.139Z --> received didChange | language: markdown | contentVersion: 1796 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:18.232Z --> received didChange | language: markdown | contentVersion: 1797 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:18.371Z --> received didChange | language: markdown | contentVersion: 1798 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:06:23.663Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:23.975Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:06:25.252Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:13.636Z --> received didChange | language: markdown | contentVersion: 1799 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:13.642Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":195,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":368} + +APP 2025-04-04T20:07:13.843Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
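Before the `ubench` runs, the silly Go numbers above can be condensed into a rough virtualization-overhead figure. This is back-of-the-envelope only; the ns/op values are simply the `BenchmarkCPUSilly1` results from the runs above:

```shell
# Relative slowdown vs. the FreeBSD host for BenchmarkCPUSilly1 (ns/op).
awk 'BEGIN {
    host = 0.4022        # FreeBSD host
    linux_vm = 0.4347    # Rocky Linux VM on Bhyve
    freebsd_vm = 0.4273  # FreeBSD VM on Bhyve
    printf "Linux VM overhead:   %.1f%%\n", (linux_vm - host) / host * 100
    printf "FreeBSD VM overhead: %.1f%%\n", (freebsd_vm - host) / host * 100
}'
```

That's roughly 8% and 6% respectively, well within the noise of such a micro-benchmark.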
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:17.418Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":46,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":369} + +APP 2025-04-04T20:07:17.620Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. \n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1800} + +APP 2025-04-04T20:07:17.621Z --> calling completion event + +APP 2025-04-04T20:07:17.621Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:07:17.621Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:07:17.621Z --> copilot | completion request + +APP 2025-04-04T20:07:17.621Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:07:17.776Z --> received didChange | language: markdown | contentVersion: 1801 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:17.816Z --> received didChange | language: markdown | contentVersion: 1802 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:17.910Z --> received didChange | language: markdown | contentVersion: 1803 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:17.996Z --> received didChange | language: markdown | contentVersion: 1804 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:18.003Z --> 
received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":50,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":370} + +APP 2025-04-04T20:07:18.080Z --> received didChange | language: markdown | contentVersion: 1805 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:18.141Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:18.143Z --> completion hints: | but it is not that much. The CPU benchmark was almost identical, and the memory benchmark was a bit slower. But I think this is acceptable for the flexibility we gain by running Linux VMs on FreeBSD. + +APP 2025-04-04T20:07:18.143Z --> sent request | {"jsonrpc":"2.0","id":369,"result":{"isIncomplete":false,"items":[{"label":"but it is not that much. The CPU benchmark was almost identical, and the memory benchmark was a bit slower. But I think this is acceptable for the flexibility we gain by running Linux VMs on FreeBSD.","kind":1,"preselect":true,"detail":"but it is not that much. The CPU benchmark was almost identical, and the memory benchmark was a bit slower. But I think this is acceptable for the flexibility we gain by running Linux VMs on FreeBSD.","insertText":"but it is not that much. The CPU benchmark was almost identical, and the memory benchmark was a bit slower. 
But I think this is acceptable for the flexibility we gain by running Linux VMs on FreeBSD.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":245},"end":{"line":452,"character":390}}}]}]}} + +APP 2025-04-04T20:07:18.143Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:18.173Z --> received didChange | language: markdown | contentVersion: 1806 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:18.204Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration still needed changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1806} + +APP 2025-04-04T20:07:18.204Z --> skipping because content is stale + +APP 2025-04-04T20:07:18.204Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:18.204Z --> sent request | {"jsonrpc":"2.0","id":370,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:18.226Z --> received didChange | language: markdown | contentVersion: 1807 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:18.362Z --> received didChange | language: markdown | contentVersion: 1808 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:18.368Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":54,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":371} + +APP 2025-04-04T20:07:18.562Z --> received didChange | language: markdown | contentVersion: 1809 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:18.570Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration still needed changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the Cit is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. \n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. 
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1809} + +APP 2025-04-04T20:07:18.570Z --> skipping because content is stale + +APP 2025-04-04T20:07:18.570Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:18.570Z --> sent request | {"jsonrpc":"2.0","id":371,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:18.663Z --> received didChange | language: markdown | contentVersion: 1810 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:18.946Z --> received didChange | language: markdown | contentVersion: 1811 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:19.020Z --> received didChange | language: markdown | contentVersion: 1812 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:19.026Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":58,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":372} + +APP 2025-04-04T20:07:19.227Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve 
This is the fourth blog post in the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the required package to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso        1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, and to apply some other adjustments (as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM), we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
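For the record, a light wrapper wouldn't be much work either. Here is a hypothetical dry-run sketch (not something I actually used): it only prints the per-host commands, assumes the ISO was already fetched with `vm iso` on every host, and `echo` would be dropped to actually execute them over SSH:

```sh
#!/bin/sh
# Hypothetical dry-run sketch: print the commands a small wrapper could
# run per FreeBSD host to create and install the 'rocky' VM.
# Remove the 'echo' to actually run them via SSH.
for host in f0 f1 f2; do
    echo "ssh $host doas vm create rocky"
    echo "ssh $host doas vm install rocky Rocky-9.5-x86_64-minimal.iso"
done
```

This prints two commands per host, which is handy for eyeballing before running anything for real.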
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes the system wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
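Before the ubench runs, here is a quick back-of-the-envelope way to quantify the silly-benchmark gaps seen above from the ns/op figures. The helper below is hypothetical (not part of sillybench or any tool used here); it just computes relative slowdowns:

```go
package main

import "fmt"

// overheadPercent returns how much slower the VM is relative to the
// host, given two ns/op figures from the benchmark output above.
func overheadPercent(hostNsPerOp, vmNsPerOp float64) float64 {
	return (vmNsPerOp - hostNsPerOp) / hostNsPerOp * 100
}

func main() {
	// BenchmarkCPUSilly1: FreeBSD host vs. Rocky Linux VM ns/op.
	fmt.Printf("Linux VM overhead: %.1f%%\n", overheadPercent(0.4022, 0.4347))
	// BenchmarkCPUSilly1: FreeBSD host vs. FreeBSD VM ns/op.
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPercent(0.4022, 0.4273))
}
```

That works out to roughly 8% for the Linux VM and 6% for the FreeBSD VM on this micro-benchmark, which matches the "slight overhead" impression.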
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, and I am not investigating further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE  C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread 2  2:12 399.81% bhyve
```

So overall, there is a small CPU overhead with Bhyve, but it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
received didChange | language: markdown | contentVersion: 1817 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:19.780Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:19.780Z --> completion hints: | performance is still very good. The memory performance is not as good, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post. + +APP 2025-04-04T20:07:19.780Z --> sent request | {"jsonrpc":"2.0","id":372,"result":{"isIncomplete":false,"items":[{"label":"performance is still very good. The memory performance is not as good, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post.","kind":1,"preselect":true,"detail":"performance is still very good. The memory performance is not as good, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post.","insertText":"performance is still very good. The memory performance is not as good, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. 
I will investigate this further in the next blog post.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":250},"end":{"line":452,"character":395}}}]}]}} + +APP 2025-04-04T20:07:19.780Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:19.832Z --> received didChange | language: markdown | contentVersion: 1818 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:19.957Z --> received didChange | language: markdown | contentVersion: 1819 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:20.024Z --> received didChange | language: markdown | contentVersion: 1820 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:20.170Z --> received didChange | language: markdown | contentVersion: 1821 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:20.217Z --> received didChange | language: markdown | contentVersion: 1822 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:20.293Z --> received didChange | language: markdown | contentVersion: 1823 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:20.491Z --> received didChange | language: markdown | contentVersion: 1824 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:20.496Z --> 
received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":70,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":373} + +APP 2025-04-04T20:07:20.697Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
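For completeness, here is what a small automation wrapper could look like. This is a hypothetical sketch only: it assumes passwordless SSH plus `doas` access to the three hosts as set up earlier in this series, and the `install_all` helper name is made up.

```shell
#!/bin/sh
# Sketch: kick off the Rocky installer on every f3s host (f0, f1, f2).
# install_all 1 only prints the per-host commands (dry run);
# install_all 0 would actually execute them over SSH.
ISO=Rocky-9.5-x86_64-minimal.iso

install_all() {
    dry_run=$1
    for host in f0 f1 f2; do
        cmd="ssh $host doas vm install rocky $ISO"
        if [ "$dry_run" = 1 ]; then
            # only show what would run
            echo "$cmd"
        else
            $cmd
        fi
    done
}

install_all 1 # dry run; pass 0 to actually execute
```

Since the installer still asks interactive questions over VNC, such a loop only saves a little typing, which is why I didn't bother for just three machines.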
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and in the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To start the VMs automatically after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
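All three guests follow the same naming and addressing scheme (`r0` through `r2`, `192.168.1.120` through `192.168.1.122`), so the `/etc/hosts` entries used in this section can be generated instead of typed out. A small sketch; the `hosts_entries` helper name is made up:

```shell
#!/bin/sh
# Sketch: emit the /etc/hosts lines for the Rocky guests,
# following the series' scheme: r$i -> 192.168.1.12$i.
hosts_entries() {
    for i in 0 1 2; do
        echo "192.168.1.12$i r$i r$i.lan r$i.lan.buetow.org"
    done
}

hosts_entries
```

Piping the output into `doas tee -a /etc/hosts` on each host would have the same effect as the heredoc used below.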
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (as `go test` benchmarks):

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12 GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but for CPU performance it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) could be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things.
Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
I will investigate this further in the next blog post.","insertText":"is still very good. The memory performance is not as good, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":250},"end":{"line":452,"character":395}}}]}]}} + +APP 2025-04-04T20:07:21.193Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:21.949Z --> received didChange | language: markdown | contentVersion: 1825 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.088Z --> received didChange | language: markdown | contentVersion: 1826 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.146Z --> received didChange | language: markdown | contentVersion: 1827 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.318Z --> received didChange | language: markdown | contentVersion: 1828 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.404Z --> received didChange | language: markdown | contentVersion: 1829 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.532Z --> received didChange | language: markdown | contentVersion: 1830 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.592Z --> received didChange 
| language: markdown | contentVersion: 1831 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.723Z --> received didChange | language: markdown | contentVersion: 1832 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.777Z --> received didChange | language: markdown | contentVersion: 1833 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.837Z --> received didChange | language: markdown | contentVersion: 1834 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.971Z --> received didChange | language: markdown | contentVersion: 1835 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:22.976Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":81,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":374} + +APP 2025-04-04T20:07:23.178Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
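A side note on reading the silly benchmark output above: comparing the `ns/op` columns across runs by eye gets tedious. A tiny awk filter helps; this is my own helper sketch, not part of sillybench, with the sample input pasted from the FreeBSD host run:

```shell
#!/bin/sh
# Print just the benchmark name and its ns/op value from `go test -bench` output.
extract_nsop() {
    awk '/^Benchmark/ { print $1, $3 }'
}

# Sample input taken from the FreeBSD host run above:
extract_nsop <<'EOF'
goos: freebsd
goarch: amd64
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
EOF
```

This prints `BenchmarkCPUSilly1-4 0.4022` and `BenchmarkCPUSilly2-4 0.4027`, which makes diffing the host and guest runs trivial.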
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. \n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
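This generalizes well: since vm-bhyve drives each guest from a single config file, adding another VM for a new workload is only a handful of commands. A dry-run sketch (the guest name `lab` is a placeholder; swap the `echo` for real execution):

```shell
#!/bin/sh
# Dry-run sketch: print the vm-bhyve commands that would add a second guest.
# The guest name "lab" is a placeholder.
run() { echo "+ $*"; }   # replace `echo "+ $*"` with "$@" to actually execute

run doas vm create lab                                # uses the default template
run doas vm configure lab                             # bump cpu/memory as needed
run doas vm install lab Rocky-9.5-x86_64-minimal.iso  # boot the installer
run doas vm list                                      # the new guest appears next to "rocky"
```

That keeps additional guests for different workloads cheap.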
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1835} + +APP 2025-04-04T20:07:23.178Z --> calling completion event + +APP 2025-04-04T20:07:23.178Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:07:23.178Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:07:23.178Z --> copilot | completion request + +APP 2025-04-04T20:07:23.179Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:07:23.694Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:23.695Z --> completion hints: | is negligible. The memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or the fact that the VM was running on a different host with different hardware. + +APP 2025-04-04T20:07:23.695Z --> sent request | {"jsonrpc":"2.0","id":374,"result":{"isIncomplete":false,"items":[{"label":"is negligible. 
The memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or the fact that the VM was running on a different host with different hardware.","kind":1,"preselect":true,"detail":"is negligible. The memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or the fact that the VM was running on a different host with different hardware.","insertText":"is negligible. The memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or the fact that the VM was running on a different host with different hardware.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":270},"end":{"line":452,"character":415}}}]}]}} + +APP 2025-04-04T20:07:23.695Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:23.844Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:27.091Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":226,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":375} + +APP 2025-04-04T20:07:27.294Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n","languageId":"markdown","version":1835} + +APP 2025-04-04T20:07:27.295Z --> calling completion event + +APP 2025-04-04T20:07:27.295Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:07:27.295Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:07:27.295Z --> copilot | completion request + +APP 2025-04-04T20:07:27.295Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:07:27.561Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:27.562Z --> completion hints: | + +APP 2025-04-04T20:07:27.562Z --> sent request | {"jsonrpc":"2.0","id":375,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":226},"end":{"line":452,"character":226}}}]}]}} + +APP 2025-04-04T20:07:27.562Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:27.622Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:28.809Z --> received didChange | language: markdown | contentVersion: 1836 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:28.907Z --> received didChange | language: markdown | contentVersion: 1837 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:28.986Z --> received didChange | language: markdown | contentVersion: 1838 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:29.142Z --> received didChange | language: markdown | contentVersion: 1839 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:29.147Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":230,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":376} + +APP 2025-04-04T20:07:29.227Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:29.349Z --> running completion on buffer 
| {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable:  -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir:  -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\nrocky  default  uefi  4  14G  0.0.0.0:5900  No  Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot  bhyve  6079  8  tcp4  *:5900  *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\nrocky  default  uefi  4  14G  0.0.0.0:5900  Yes [1]  Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// This code lives in a _test.go file so that `go test -bench` picks it up.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4  1000000000  0.4022 ns/op\nBenchmarkCPUSilly2-4  1000000000  0.4027 ns/op\nPASS\nok  codeberg.org/snonux/sillybench  0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n  git clone https://codeberg.org/snonux/sillybench && \\\n  cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4  1000000000  0.4347 ns/op\nBenchmarkCPUSilly2-4  1000000000  0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1  1000000000  0.4273 ns/op\nBenchmarkCPUSilly2  1000000000  0.4286 ns/op\nPASS\nok  codeberg.org/snonux/sillybench  0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU:   671010 (0.40s)\nUbench Single MEM:  1705237 (0.48s)\n-----------------------------------\nUbench Single AVG:  1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU:  2660220\nUbench MEM:  3095182\n--------------------\nUbench AVG:  2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU:   672792 (0.40s)\nUbench Single MEM:   852757 (0.48s)\n-----------------------------------\nUbench Single AVG:   762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU:  2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr  4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND\n 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1839} + +APP 2025-04-04T20:07:29.349Z --> calling completion event + +APP 2025-04-04T20:07:29.349Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:07:29.349Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:07:29.349Z --> copilot | completion request + +APP 2025-04-04T20:07:29.350Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:07:29.362Z --> received didChange | language: markdown | contentVersion: 1840 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:29.424Z --> received didChange | language: markdown | contentVersion: 1841 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:29.555Z --> received didChange | language: markdown | contentVersion: 1842 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:29.703Z --> received didChange | 
language: markdown | contentVersion: 1843 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:29.822Z --> received didChange | language: markdown | contentVersion: 1844 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:29.943Z --> received didChange | language: markdown | contentVersion: 1845 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:30.020Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:30.022Z --> completion hints: | memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or just the fact that the VM has only 14GB of RAM and the host has 64GB of RAM. I will investigate this further in the next blog post. + +APP 2025-04-04T20:07:30.023Z --> sent request | {"jsonrpc":"2.0","id":376,"result":{"isIncomplete":false,"items":[{"label":"memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or just the fact that the VM has only 14GB of RAM and the host has 64GB of RAM. I will investigate this further in the next blog post.","kind":1,"preselect":true,"detail":"memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or just the fact that the VM has only 14GB of RAM and the host has 64GB of RAM. I will investigate this further in the next blog post.","insertText":"memory performance is a bit worse, but I am not sure if this is due to the Bhyve overhead or just the fact that the VM has only 14GB of RAM and the host has 64GB of RAM. 
I will investigate this further in the next blog post.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":454},"end":{"line":452,"character":454}}}]}]}} + +APP 2025-04-04T20:07:30.023Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:30.024Z --> received didChange | language: markdown | contentVersion: 1846 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:30.662Z --> received didChange | language: markdown | contentVersion: 1847 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:30.673Z --> received didChange | language: markdown | contentVersion: 1848 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:30.699Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:30.850Z --> received didChange | language: markdown | contentVersion: 1849 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.007Z --> received didChange | language: markdown | contentVersion: 1850 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.081Z --> received didChange | language: markdown | contentVersion: 1851 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.323Z --> received didChange | language: markdown | contentVersion: 1852 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.356Z --> received didChange | language: markdown | contentVersion: 1853 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.527Z --> received didChange | language: markdown | contentVersion: 1854 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.571Z --> received didChange | language: markdown | contentVersion: 1855 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.674Z --> received didChange | language: markdown | contentVersion: 1856 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.680Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":247,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":377} + +APP 2025-04-04T20:07:31.865Z --> received didChange | language: markdown | contentVersion: 1857 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.880Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux 
Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable:  -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir:  -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\nrocky  default  uefi  4  14G  0.0.0.0:5900  No  Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot  bhyve  6079  8  tcp4  *:5900  *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE\nrocky  default  uefi  4  14G  0.0.0.0:5900  Yes [1]  Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we run `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (the benchmarks live in a `_test.go` file so that `go test` picks them up):\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't fully use those resources anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
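Before the ubench numbers, a rough sense of scale for the silly-benchmark results (my own arithmetic, not from the original runs): the `BenchmarkCPUSilly1` ns/op values put both VMs within single-digit percent of the host:

```sh
# Relative slowdown vs. the FreeBSD host, from the BenchmarkCPUSilly1
# ns/op values measured above (host 0.4022, Linux VM 0.4347, FreeBSD VM 0.4273).
awk 'BEGIN {
    host = 0.4022
    printf "Linux VM:   +%.1f%%\n", (0.4347 - host) / host * 100
    printf "FreeBSD VM: +%.1f%%\n", (0.4273 - host) / host * 100
}'
# Linux VM:   +8.1%
# FreeBSD VM: +6.2%
```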
Here, we limit `ubench` to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark in the FreeBSD VM, however, failed as shown above.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1857} + +APP 2025-04-04T20:07:31.880Z --> skipping because content is stale + +APP 2025-04-04T20:07:31.880Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:31.880Z --> sent request | {"jsonrpc":"2.0","id":377,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:31.896Z --> received didChange | language: markdown | contentVersion: 1858 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.959Z --> received didChange | language: markdown | contentVersion: 1859 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:31.964Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":250,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":378} + +APP 2025-04-04T20:07:32.167Z --> calling completion event + +APP 2025-04-04T20:07:32.167Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:07:32.167Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:07:32.167Z --> copilot | completion request + +APP 2025-04-04T20:07:32.167Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:07:32.683Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:32.683Z --> completion hints: | the FreeBSD VM failed, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this in the next blog post. + +APP 2025-04-04T20:07:32.684Z --> sent request | {"jsonrpc":"2.0","id":378,"result":{"isIncomplete":false,"items":[{"label":"the FreeBSD VM failed, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. 
I will investigate this in the next blog post.","kind":1,"preselect":true,"detail":"the FreeBSD VM failed, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this in the next blog post.","insertText":"the FreeBSD VM failed, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this in the next blog post.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":386},"end":{"line":452,"character":386}}}]}]}} + +APP 2025-04-04T20:07:32.684Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:33.179Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:37.296Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:38.165Z --> received didChange | language: markdown | contentVersion: 1860 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:38.386Z --> received didChange | language: markdown | contentVersion: 1861 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:38.573Z --> received didChange | language: markdown | contentVersion: 1862 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:39.350Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:39.585Z --> received didChange | language: markdown | contentVersion: 1863 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:39.812Z --> received didChange | language: markdown | contentVersion: 1864 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:39.966Z --> received didChange | language: markdown | contentVersion: 1865 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:40.216Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":250,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":379} + +APP 2025-04-04T20:07:40.417Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, we have to adjust the configuration (plus some other tweaks: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems to be a bit more problematic, though.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1865} + +APP 2025-04-04T20:07:40.418Z --> calling completion event + +APP 2025-04-04T20:07:40.418Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:07:40.418Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:07:40.418Z --> copilot | completion request + +APP 2025-04-04T20:07:40.419Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:07:40.667Z --> received didChange | language: markdown | contentVersion: 1866 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:40.726Z --> received didChange | language: markdown | contentVersion: 1867 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:40.880Z --> received didChange | language: markdown | contentVersion: 1868 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:40.886Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":253,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":380} + +APP 2025-04-04T20:07:40.977Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:40.978Z --> completion hints: | ms to be a bit more problematic, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post. + +APP 2025-04-04T20:07:40.978Z --> sent request | {"jsonrpc":"2.0","id":379,"result":{"isIncomplete":false,"items":[{"label":"ms to be a bit more problematic, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post.","kind":1,"preselect":true,"detail":"ms to be a bit more problematic, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post.","insertText":"ms to be a bit more problematic, but I am not sure if this is a Bhyve issue or a Rocky Linux issue. 
I will investigate this further in the next blog post.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":404},"end":{"line":452,"character":404}}}]}]}} + +APP 2025-04-04T20:07:40.978Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:41.043Z --> received didChange | language: markdown | contentVersion: 1869 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:41.087Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
// This lives in a _test.go file so that `go test -bench` picks it up.
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating this further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark result in the VM seems off, however.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It is efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Support for the POPCNT instruction matters for Bhyve because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, a few adjustments are needed. And as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs of 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1873} + +APP 2025-04-04T20:07:41.603Z --> skipping because content is stale + +APP 2025-04-04T20:07:41.603Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:41.603Z --> sent request | {"jsonrpc":"2.0","id":381,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:41.628Z --> received didChange | language: markdown | contentVersion: 1874 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:41.633Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":259,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":382} + +APP 2025-04-04T20:07:41.739Z --> received didChange | language: markdown | contentVersion: 1875 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:41.824Z --> received didChange | language: markdown | contentVersion: 1876 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:41.834Z --> running completion on 
buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration doesn't boot Rocky Linux, though, so some adjustments are needed. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14G of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
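A quick note on the `truncate` call in the previous section: it only changes the logical size of the image file, so the enlarged `disk0.img` stays sparse and no extra space is allocated on the ZFS pool until the guest actually writes data. This can be demonstrated on a scratch file (the `100M` size here is just for the demo):

```sh
# truncate extends a file logically without allocating blocks.
f=$(mktemp)
truncate -s 100M "$f"
wc -c < "$f"   # logical size: 104857600 bytes
du -k "$f"     # allocated size: close to 0 KB on most filesystems, as the file is sparse
rm "$f"
```

The same applies to the 100G VM images: `zfs list` only shows the space the guests have actually written. 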
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs; the examples are all executed on `f0` (and in the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nNext, we change the network configuration of the VMs from DHCP to static addresses. 
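As a side note, the VMs follow the same naming scheme as the hosts, just with `r` instead of `f`. The three `/etc/hosts` entries used in this section can therefore also be generated with a small loop instead of being typed out (a sketch; the `192.168.1.12x` addresses are the ones used throughout this series):

```sh
# Generate the /etc/hosts lines for the three Rocky VMs r0, r1 and r2.
for i in 0 1 2; do
  printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```

This prints the exact three lines appended to `/etc/hosts` below. 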
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
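As an aside, the relative slowdown of both guests in the silly benchmark can be quantified from the `BenchmarkCPUSilly1` numbers quoted above (0.4022 ns/op on the host, 0.4347 ns/op in the Rocky Linux VM, and 0.4273 ns/op in the FreeBSD VM):

```sh
# Relative slowdown of each Bhyve guest versus the FreeBSD host,
# computed from the BenchmarkCPUSilly1 ns/op values quoted above.
awk 'BEGIN {
    host = 0.4022; linux_vm = 0.4347; freebsd_vm = 0.4273
    printf "Rocky Linux VM: +%.1f%%\n", (linux_vm - host) / host * 100
    printf "FreeBSD VM:     +%.1f%%\n", (freebsd_vm - host) / host * 100
}'
```

So both guests stay within roughly 6 to 8 percent of the bare-metal result in this admittedly silly benchmark. 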
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit, though.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1876} + +APP 2025-04-04T20:07:41.835Z --> skipping because content is stale + +APP 2025-04-04T20:07:41.835Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:41.835Z --> sent request | {"jsonrpc":"2.0","id":382,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:41.906Z --> received didChange | language: markdown | contentVersion: 1877 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:42.054Z --> received didChange | language: markdown | contentVersion: 1878 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:42.168Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:42.176Z --> received didChange | language: markdown | contentVersion: 1879 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:42.289Z --> received didChange | language: markdown | contentVersion: 1880 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:42.523Z --> received didChange | language: markdown | contentVersion: 1881 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:42.528Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":266,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":383} + +APP 2025-04-04T20:07:42.678Z --> received didChange | language: markdown | contentVersion: 1882 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:42.730Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, and I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems a little off, though.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1882}
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, b\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1892} + +APP 2025-04-04T20:07:43.869Z --> skipping because content is stale + +APP 2025-04-04T20:07:43.869Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:43.869Z --> sent request | {"jsonrpc":"2.0","id":385,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:43.909Z --> received didChange | language: markdown | contentVersion: 1893 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:43.972Z --> received didChange | language: markdown | contentVersion: 1894 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:44.119Z --> received didChange | language: markdown | contentVersion: 1895 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:44.125Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":280,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":386} + +APP 2025-04-04T20:07:44.270Z --> received didChange | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It is efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance.
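As a quick illustration of what the instruction actually computes, here is a software popcount in plain POSIX `sh` arithmetic; the hardware instruction does the same thing in a single operation:

```sh
# Software popcount: count the one-bits of a number, which is what the
# POPCNT instruction computes in a single hardware operation.
popcount() {
    n=$1 count=0
    while [ "$n" -gt 0 ]; do
        count=$((count + (n & 1)))
        n=$((n >> 1))
    done
    echo "$count"
}

popcount 255    # 8 one-bits (0xff)
popcount 1024   # a single one-bit (2^10)
```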
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, we need a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14G of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900                *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
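A side note on the `truncate` step above: it only changes the apparent file size, so the enlarged `disk0.img` stays sparse, and no additional disk space is allocated until the guest actually writes data. This can be demonstrated on a throwaway file:

```sh
# Grow a file sparsely, as done with disk0.img above: the apparent size
# changes, but (almost) no blocks are allocated on disk.
tmp=$(mktemp)
truncate -s 1M "$tmp"
wc -c < "$tmp"   # apparent size: 1048576 bytes
du -k "$tmp"     # allocated size: close to 0 KB
rm "$tmp"
```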
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
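The guest address plan simply mirrors the host plan (host `f<i>` at `.13<i>` runs guest `r<i>` at `.12<i>`), so the `/etc/hosts` entries used in this section can be generated instead of typed by hand:

```sh
# Generate the r* host entries from the base address 192.168.1.120,
# following the naming scheme used throughout this series.
base=120
for i in 0 1 2; do
    printf '192.168.1.%d r%d r%d.lan r%d.lan.buetow.org\n' \
        "$((base + i))" "$i" "$i" "$i"
done
```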
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, so I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
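To put a number on "slightly slower": from the `BenchmarkCPUSilly1` ns/op figures above, the relative slowdown of the Rocky Linux VM versus the FreeBSD host can be computed with a one-liner (a rough figure for a silly benchmark, not a rigorous measurement):

```sh
# Relative slowdown of the Linux VM vs. the FreeBSD host, using the
# BenchmarkCPUSilly1 ns/op values from the runs above.
awk 'BEGIN { printf "%.1f%% slower in the VM\n", (0.4347 - 0.4022) / 0.4022 * 100 }'
```

So the gap is in the order of a few percent. With that quantified, back to `ubench`.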
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, and I am not investigating that further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a bit more, but I am not digging into that here.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install `bhyve-firmware`, the package required to boot VMs with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot (a UEFI loader and the Linux guest type). Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
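One detail worth knowing about the `truncate` step above: it only grows the apparent size of the image file. The file stays sparse, and no disk blocks are allocated until the guest actually writes to them. A quick illustrative sketch (using a throwaway file, not the real VM image):

```shell
# Grow a fresh file to 100G and compare apparent vs. allocated size.
truncate -s 100G /tmp/demo.img
ls -lh /tmp/demo.img   # apparent size: 100G
du -h /tmp/demo.img    # allocated blocks: (close to) zero, as the file is sparse
rm /tmp/demo.img
```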
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (or in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
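Since the three VM entries only differ in their last octet and hostname digit (`192.168.1.120`-`122` for `r0`-`r2`, the addresses used in this series), a small loop can generate the `/etc/hosts` lines instead of typing them three times. A convenience sketch:

```shell
# Emit the /etc/hosts lines for the three Rocky VMs r0..r2.
for i in 0 1 2; do
  printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```

Pipe the output through `doas tee -a /etc/hosts` if you prefer this over the heredoc shown below.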
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5)
linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
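To put the silly-benchmark numbers above into perspective, the host-vs-Linux-VM gap works out to roughly 8%. This is my own back-of-the-envelope arithmetic, not part of any benchmark output:

```shell
# Relative slowdown of the Rocky Linux VM vs. the FreeBSD host,
# using the BenchmarkCPUSilly1 figures quoted above (ns/op).
awk 'BEGIN { host = 0.4022; vm = 0.4347; printf "%.1f%%\n", (vm - host) / host * 100 }'
# prints 8.1%
```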
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, and I am not investigating further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread  2  2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off by a bit more, but I am not investigating that further here.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its main strength is its minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
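As a side note, what `POPCNT` computes can be sketched in a few lines of shell, counting the set bits of a number one at a time; the hardware instruction does the same in a single operation. This is only an illustration and not part of the actual setup:

```shell
# Count the set bits of 181 (binary 10110101) the slow way; the POPCNT
# instruction computes this in one go in hardware.
n=181
count=0
while [ "$n" -gt 0 ]; do
  count=$((count + (n & 1)))   # add the lowest bit
  n=$((n >> 1))                # shift the next bit into place
done
echo "$count"
```

Since 181 is 10110101 in binary, this prints 5.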
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
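An aside on the `truncate` step above: it returns instantly because it only changes the file's apparent size. The image stays sparse, and blocks are allocated only as the guest writes to them. A quick demonstration on a scratch file (hypothetical temp file, not the real VM image):

```shell
# Grow a scratch file to 100M without writing 100M of zeros.
img=$(mktemp)
truncate -s 100M "$img"     # returns instantly; the file is sparse
size=$(wc -c < "$img")      # apparent size in bytes
echo "$size"
rm -f "$img"
```

The apparent size is 100M (104857600 bytes), while the actual on-disk usage stays near zero until data is written.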
The three VNC addresses of the VMs are `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (respectively in the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
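The address plan follows a simple scheme: VM `rN` gets IP `192.168.1.12N`. Just as an illustration (in practice the lines are simply typed out below), the `/etc/hosts` entries for the VMs could be generated from that scheme:

```shell
# Emit one /etc/hosts line per Rocky VM, derived from the VM index
# (r0 -> 192.168.1.120, r1 -> .121, r2 -> .122).
for i in 0 1 2; do
  printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```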
As per the previous post of this series, the 3 FreeBSD hosts are already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands in each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility, available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
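Before that, to put the silly benchmark numbers from above into perspective: the gap between the FreeBSD host (0.4022 ns/op) and the Rocky Linux VM (0.4347 ns/op) works out to roughly 8%, as this quick awk calculation shows:

```shell
# Relative slowdown of the Linux VM vs. the FreeBSD host in BenchmarkCPUSilly1.
awk 'BEGIN { printf "%.1f%%\n", (0.4347 - 0.4022) / 0.4022 * 100 }'
```

Given the measurement noise of such a micro-benchmark, treat this as indicative only.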
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, and I won't investigate it further here.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead of Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a bit more, but I am not sure why.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) of the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and for managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure w\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1910} + +APP 2025-04-04T20:07:45.814Z --> skipping because content is stale + +APP 2025-04-04T20:07:45.814Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:45.814Z --> sent request | {"jsonrpc":"2.0","id":390,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:45.871Z --> received didChange | language: markdown | contentVersion: 1911 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:45.959Z --> received didChange | language: markdown | contentVersion: 1912 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:46.063Z --> received didChange | language: markdown | contentVersion: 1913 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:46.174Z --> received didChange | language: markdown | contentVersion: 1914 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:46.252Z --> received didChange | language: markdown | contentVersion: 1915 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to set up the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support matters for CPU virtualization with Bhyve because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the `bhyve-firmware` package, which is required to boot guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as a VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
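A side note on the `truncate` step: it only changes the image file's apparent size, and the file stays sparse until the guest actually writes data, so growing the image does not immediately consume 100G on the ZFS pool. A quick local sketch of the mechanism, using a hypothetical throwaway file and a smaller size:

```sh
# Grow a fresh sparse image file; same mechanism as for disk0.img
truncate -s 20M demo.img
truncate -s 100M demo.img

# The apparent size is now 100M (104857600 bytes)...
wc -c < demo.img

# ...but hardly any blocks are allocated on disk yet
du -k demo.img

rm demo.img
```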
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
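The host entries for the Rocky VMs, which we add next, follow a fixed pattern (`192.168.1.12<n>` for VM `r<n>`), so they could also be generated with a small loop instead of being typed out; a sketch that only prints them to stdout:

```sh
# Generate the three r* /etc/hosts entries
# (print only; one would pipe this to "doas tee -a /etc/hosts" to apply)
for i in 0 1 2; do
    printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```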
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to also install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
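A note on reading the output below: `ubench`'s `AVG` line appears to be simply the arithmetic mean of the CPU and MEM scores, truncated to an integer, which makes the reports easy to sanity-check. For example, for the host's single-CPU run:

```sh
# (671010 + 1705237) / 2 = 1188123.5, truncated,
# matching the reported "Ubench Single AVG: 1188123"
awk 'BEGIN { printf "%d\n", (671010 + 1705237) / 2 }'
```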
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating this further here.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE  C TIME WCPU    COMMAND
7449 root      14  20    0  14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off by a bit more, but I am not sure why.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, we need some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
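A quick aside on the silly Go benchmark above (the snippet below is my own sketch, not part of the sillybench repository): a statement like `_ = i * i` can legally be eliminated by the Go compiler, which would leave the benchmark measuring an empty loop. Writing the result to a package-level sink variable is a common way to keep the work observable:

```go
// Sketch (not from sillybench): keeping benchmark work observable.
// Assigning to a package-level variable prevents the compiler from
// proving the computation dead and removing it.
package main

import "fmt"

// Sink is package-level, so writes to it cannot be eliminated.
var Sink int

// work sums the squares 0*0 + 1*1 + ... + (n-1)*(n-1), the kind of
// loop body the silly benchmark measures.
func work(n int) int {
	s := 0
	for i := 0; i < n; i++ {
		s += i * i
	}
	return s
}

func main() {
	Sink = work(1000)
	fmt.Println(Sink) // prints 332833500
}
```

Inside a `testing.B` benchmark, the same idiom applies: assign the loop's result to a package-level or exported variable instead of the blank identifier.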
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust these numbers.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1921} + +APP 2025-04-04T20:07:46.914Z --> skipping because content is stale + +APP 2025-04-04T20:07:46.914Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:46.914Z --> sent request | {"jsonrpc":"2.0","id":392,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:46.994Z --> received didChange | language: markdown | contentVersion: 1922 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.080Z --> received didChange | language: markdown | contentVersion: 1923 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.208Z --> received didChange | language: markdown | contentVersion: 1924 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.383Z --> received didChange | language: markdown | contentVersion: 1925 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.444Z --> received didChange | language: markdown | contentVersion: 1926 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.450Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":311,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":393} + +APP 2025-04-04T20:07:47.569Z --> received didChange | language: markdown | contentVersion: 1927 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.647Z --> failed to parse line: | failed to parse | Content-Length: 18911
+
+ + +APP 2025-04-04T20:07:47.647Z --> received didChange | language: markdown | contentVersion: 1928 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.652Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, we need some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
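As an aside, the silly-benchmark numbers can be condensed into rough overhead figures. The following is my own small sketch (not part of sillybench); the ns/op values are the ones measured in the runs shown earlier:

```go
package main

import "fmt"

// overheadPercent reports how much slower a VM's ns/op is relative
// to the bare-metal FreeBSD host's ns/op, in percent.
func overheadPercent(hostNsPerOp, vmNsPerOp float64) float64 {
	return (vmNsPerOp - hostNsPerOp) / hostNsPerOp * 100
}

func main() {
	host := 0.4022      // FreeBSD host, BenchmarkCPUSilly1
	linuxVM := 0.4347   // Rocky Linux VM on Bhyve
	freebsdVM := 0.4273 // FreeBSD VM on Bhyve

	fmt.Printf("Linux VM overhead: %.1f%%\n", overheadPercent(host, linuxVM))
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPercent(host, freebsdVM))
}
```

That works out to roughly 8% for the Linux VM and 6% for the FreeBSD VM in this single, unscientific run.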
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1928} + +APP 2025-04-04T20:07:47.652Z --> skipping because content is stale + +APP 2025-04-04T20:07:47.652Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:47.652Z --> sent request | {"jsonrpc":"2.0","id":393,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:47.676Z --> received didChange | language: markdown | contentVersion: 1929 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.788Z --> received didChange | language: markdown | contentVersion: 1930 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.816Z --> received didChange | language: markdown | contentVersion: 1931 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:47.821Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":316,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":394} + +APP 2025-04-04T20:07:47.988Z --> received didChange | 
language: markdown | contentVersion: 1932 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.022Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=\"io\"\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
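Since each FreeBSD host exposes exactly one VM console on port 5900, the VNC endpoints can be sanity-checked before pointing a client at them. A hypothetical Go sketch (the `f0`–`f2` hostnames are from my LAN; nothing like this was part of the actual setup):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// vncAddr returns the host:port a VNC client would connect to;
// vm-bhyve binds the single VM's console to port 5900 on each host.
func vncAddr(host string) string {
	return net.JoinHostPort(host, "5900")
}

func main() {
	for _, h := range []string{"f0", "f1", "f2"} {
		addr := vncAddr(h)
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: not reachable (%v)\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: VNC port open\n", addr)
	}
}
```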
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples below are all executed on `f0` (i.e., on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
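The VM addressing follows a simple pattern (`192.168.1.120` + N for VM `rN`), which can be expressed as a tiny generator. This is only an illustration of the scheme, not a tool used in the setup:

```go
package main

import "fmt"

// hostsLine builds one /etc/hosts entry for VM rN, following the
// scheme used in this series: 192.168.1.(120+N) rN rN.lan rN.lan.buetow.org
func hostsLine(n int) string {
	host := fmt.Sprintf("r%d", n)
	return fmt.Sprintf("192.168.1.%d %s %s.lan %s.lan.buetow.org", 120+n, host, host, host)
}

func main() {
	// Print the entries for the three Rocky VMs r0..r2.
	for n := 0; n < 3; n++ {
		fmt.Println(hostsLine(n))
	}
}
```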
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// Note: Go benchmarks must live in a _test.go file, e.g. sillybench_test.go.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1932} + +APP 2025-04-04T20:07:48.022Z --> skipping because content is stale + +APP 2025-04-04T20:07:48.022Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:48.022Z --> sent request | {"jsonrpc":"2.0","id":394,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:48.050Z --> received didChange | language: markdown | contentVersion: 1933 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.211Z --> received didChange | language: markdown | contentVersion: 1934 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.305Z --> received didChange | language: markdown | contentVersion: 1935 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.371Z --> received didChange | language: markdown | contentVersion: 1936 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.598Z --> received didChange | language: markdown | contentVersion: 1937 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.626Z --> received didChange | language: markdown | contentVersion: 1938 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.759Z --> received didChange | language: markdown | contentVersion: 1939 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.781Z --> received didChange | language: markdown | contentVersion: 1940 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.892Z --> received didChange | language: markdown | contentVersion: 1941 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:48.897Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":326,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":395} + +APP 2025-04-04T20:07:49.099Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark \n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1941} + +APP 2025-04-04T20:07:49.099Z --> calling completion event + +APP 2025-04-04T20:07:49.099Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:07:49.099Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:07:49.099Z --> copilot | completion request + +APP 2025-04-04T20:07:49.100Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:07:49.239Z --> received didChange | language: markdown | contentVersion: 1942 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:49.364Z --> received didChange | language: markdown | contentVersion: 1943 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:49.370Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:07:49.371Z --> completion hints: | as it failed. 
+ +APP 2025-04-04T20:07:49.371Z --> sent request | {"jsonrpc":"2.0","id":395,"result":{"isIncomplete":false,"items":[{"label":"as it failed.","kind":1,"preselect":true,"detail":"as it failed.","insertText":"as it failed.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":339},"end":{"line":452,"character":339}}}]}]}} + +APP 2025-04-04T20:07:49.371Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:49.425Z --> received didChange | language: markdown | contentVersion: 1944 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:49.430Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":329,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":396} + +APP 2025-04-04T20:07:49.561Z --> received didChange | language: markdown | contentVersion: 1945 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:49.631Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines.
It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. It matters here because Bhyve expects it on the host, and guest operating systems use the instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without it, some applications might not run at all or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual overhead.
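As a small aside to the `POPCNT` check above: the instruction simply counts the one-bits of a word, which can be mimicked in plain shell arithmetic. This is purely an illustration with a made-up value; real software uses the CPU instruction or compiler builtins:

```sh
#!/bin/sh
# Software bit count: what POPCNT computes in a single CPU instruction.
n=173 # 173 is 10101101 in binary, so 5 bits are set
count=0
while [ "$n" -gt 0 ]; do
    count=$((count + (n & 1))) # add the lowest bit
    n=$((n >> 1))              # shift right to look at the next bit
done
echo "popcnt(173) = $count"
```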
We also install the required package to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This doesn't boot Rocky Linux yet, though: we need the UEFI loader, plus some other adjustments. E.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
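Had there been more than three VMs, these repeated steps would have been worth scripting. The following sketch only prints the commands it would run over SSH (host names from this series; treat it as an untested idea, not a finished tool):

```sh
#!/bin/sh
# Dry run: print the per-host disk-enlargement steps instead of executing them.
for host in f0 f1 f2; do
    echo "ssh $host doas vm stop rocky"
    echo "ssh $host doas truncate -s 100G /bhyve/rocky/disk0.img"
    echo "ssh $host doas vm install rocky Rocky-9.5-x86_64-minimal.iso"
done
```

Dropping the `echo`s would turn it into the real thing, at which point error handling would be needed.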
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
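Since the guest addresses used in this series follow a fixed pattern (`192.168.1.12N` for `rN`), the host-file entries can also be generated instead of typed by hand; a tiny sketch:

```sh
#!/bin/sh
# Generate the /etc/hosts lines for the Rocky guests r0..r2.
for i in 0 1 2; do
    printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```

Piped into `doas tee -a /etc/hosts`, this yields the same three entries that are appended manually in this post.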
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their addresses to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't need that much anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
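To put a number on "slightly slower": the ns/op figures from the silly benchmark runs above translate into a rough relative overhead (BenchmarkCPUSilly1, figures copied from the outputs above):

```sh
#!/bin/sh
# Relative slowdown of the guests versus the bare FreeBSD host, from ns/op figures.
awk 'BEGIN {
    host = 0.4022  # FreeBSD host
    lin  = 0.4347  # Rocky Linux VM
    fvm  = 0.4273  # FreeBSD VM
    printf "Rocky Linux VM: +%.1f%%\n", (lin - host) / host * 100
    printf "FreeBSD VM:     +%.1f%%\n", (fvm - host) / host * 100
}'
```

This prints roughly +8.1% for the Linux VM and +6.2% for the FreeBSD VM, which matches the "small enough for my use cases" impression.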
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating this further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE   RES STATE   C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G   78M kqread  2  2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems a little more off, but I am not sure whether to trust this benchmark or not.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:53.194Z --> received didChange | language: markdown | contentVersion: 1950 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.289Z --> received didChange | language: markdown | contentVersion: 1951 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.295Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":336,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":399} + +APP 2025-04-04T20:07:53.376Z --> received didChange | language: markdown | contentVersion: 1952 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.470Z --> received didChange | language: markdown | contentVersion: 1953 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.497Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
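The address plan follows a simple pattern: VM `r<n>` gets `192.168.1.12<n>`, right below the FreeBSD hosts' `192.168.1.13<n>` range. Since the `/etc/hosts` entries are so regular, they could also be generated instead of typed by hand (just a sketch; the addresses are specific to my LAN):

```sh
# Print the /etc/hosts entries for the three Rocky VMs r0..r2
for i in 0 1 2; do
  printf '192.168.1.12%d r%d r%d.lan r%d.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```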
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell on each VM via VNC and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, so I tried the same with an up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
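As a quick aside before the `ubench` runs: the relative slowdowns from the silly Go benchmarks above can be quantified with a one-liner (the ns/op values for `BenchmarkCPUSilly1` are copied from the runs above; just a back-of-the-envelope calculation):

```sh
# Percentage slowdown of the VMs relative to the FreeBSD host
awk 'BEGIN {
    host = 0.4022; rocky_vm = 0.4347; freebsd_vm = 0.4273
    printf "Rocky Linux VM vs. host: +%.1f%%\n", (rocky_vm - host) / host * 100
    printf "FreeBSD VM vs. host:     +%.1f%%\n", (freebsd_vm - host) / host * 100
}'
# Prints roughly +8.1% and +6.2% respectively
```

So both VMs stay within single-digit percentages of bare metal in this silly test.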
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is some overhead of Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a bit more, but I am not sure whether to trust this benchmark or not.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1953} + +APP 2025-04-04T20:07:53.498Z --> skipping because content is stale + +APP 2025-04-04T20:07:53.498Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:53.498Z --> sent request | {"jsonrpc":"2.0","id":399,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:53.518Z --> received didChange | language: markdown | contentVersion: 1954 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.524Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":339,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":400} + +APP 2025-04-04T20:07:53.679Z --> received didChange | language: markdown | contentVersion: 1955 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.726Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the 
fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. I am n\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1955} + +APP 2025-04-04T20:07:53.726Z --> skipping because content is stale + +APP 2025-04-04T20:07:53.726Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:53.726Z --> sent request | {"jsonrpc":"2.0","id":400,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:53.843Z --> received didChange | language: markdown | contentVersion: 1956 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.901Z --> received didChange | language: markdown | contentVersion: 1957 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.988Z --> received didChange | language: markdown | contentVersion: 1958 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:53.994Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":343,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":401} + +APP 2025-04-04T20:07:54.152Z --> received didChange | 
language: markdown | contentVersion: 1959 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:54.195Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n    git clone https://codeberg.org/snonux/sillybench && \\\n    cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not investigating this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1959} + +APP 2025-04-04T20:07:54.195Z --> skipping because content is stale + +APP 2025-04-04T20:07:54.195Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:07:54.195Z --> sent request | {"jsonrpc":"2.0","id":401,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:07:54.597Z --> received didChange | language: markdown | contentVersion: 1960 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:54.800Z --> received didChange | language: markdown | contentVersion: 1961 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:54.992Z --> received didChange | language: markdown | contentVersion: 1962 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:55.206Z --> received didChange | language: markdown | contentVersion: 1963 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:55.394Z --> received didChange | language: markdown | contentVersion: 1964 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:55.620Z --> received didChange | language: markdown | contentVersion: 1965 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:55.866Z --> received didChange | language: markdown | contentVersion: 1966 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:56.080Z --> received didChange | language: markdown | contentVersion: 1967 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:56.304Z --> received didChange | language: markdown | contentVersion: 1968 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:56.493Z --> received didChange | language: markdown | contentVersion: 1969 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:07:59.101Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:01.544Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:03.415Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":334,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":402} + +APP 2025-04-04T20:08:03.619Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n  * found guest in /zroot/bhyve/rocky\n  * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. \n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1969} + +APP 2025-04-04T20:08:03.619Z --> calling completion event + +APP 2025-04-04T20:08:03.619Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:08:03.620Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:08:03.620Z --> copilot | completion request + +APP 2025-04-04T20:08:03.620Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:08:03.704Z --> received didChange | language: markdown | contentVersion: 1970 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:03.814Z --> received didChange | language: markdown | contentVersion: 1971 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:03.912Z --> received didChange | language: markdown | contentVersion: 1972 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:03.913Z --> response | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting demands in a home lab. f3s?
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
  FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
  OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD base system but available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the required package to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But this configuration needs a few changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM).
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
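One note on the `truncate` step: it only grows the apparent size of `disk0.img`; the file stays sparse, so enlarging it to 100G does not immediately consume 100G in the ZFS pool. A quick local sketch (with a throwaway file and 100M instead of 100G) shows the effect:

```sh
# truncate grows the apparent file size without allocating blocks.
tmp=$(mktemp -d)
truncate -s 100M "$tmp/disk0-demo.img"   # grow the apparent size only
wc -c < "$tmp/disk0-demo.img"            # apparent size: 104857600 bytes
du -k "$tmp/disk0-demo.img" | cut -f1    # allocated blocks (KB): 0 or close to it, the file is sparse
rm -r "$tmp"
```

The enlarged guest image behaves the same way: real pool space is only consumed as the guest writes to it.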
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
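The address plan mirrors the host naming: the FreeBSD hosts `f0`-`f2` sit at `192.168.1.130`-`.132`, and the Rocky VMs `r0`-`r2` get `192.168.1.120`-`.122`. Since the three VM entries only differ in the host number, a small loop can print the `/etc/hosts` lines we are about to add (just a sketch; typing them by hand works equally well):

```sh
# Print the /etc/hosts entries for the three Rocky VMs (r0..r2 -> .120-.122).
for i in 0 1 2; do
    printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
done
```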
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5)
linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Still, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
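As a quick aside, here is the silly benchmark's takeaway expressed as percentages (a back-of-the-envelope calculation with the `BenchmarkCPUSilly1` ns/op values from the runs above):

```sh
# Relative CPU overhead of the two guests vs. the FreeBSD host (ns/op values).
awk 'BEGIN {
    host = 0.4022; linux_vm = 0.4347; freebsd_vm = 0.4273
    printf "Linux VM overhead:   %.1f%%\n", (linux_vm - host) / host * 100
    printf "FreeBSD VM overhead: %.1f%%\n", (freebsd_vm - host) / host * 100
}'
```

Both land in the single digits (about 8.1% and 6.2%), which matches the earlier impression that Bhyve's CPU overhead is negligible.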
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, and I am not investigating further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE  C TIME WCPU    COMMAND
7449 root      14  20    0 14G  78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a bit more, but I am not sure whether to trust this benchmark or not.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust this configuration to make Rocky Linux boot. We also give the VMs beefy specs (4 CPU cores and 14GB RAM), as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped each VM again, ran `truncate` on its image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples shown are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of them anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, I don't need to run memory-intensive workloads in the VMs, so I am not too concerned about this.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1976} + +APP 2025-04-04T20:08:04.696Z --> calling completion event + +APP 2025-04-04T20:08:04.696Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:08:04.696Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:08:04.696Z --> copilot | completion request + +APP 2025-04-04T20:08:04.696Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:08:04.725Z --> received didChange | language: markdown | contentVersion: 1977 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:04.800Z --> failed to parse line: | failed to parse | Content-Length: 18941
+
+ + +APP 2025-04-04T20:08:04.801Z --> received didChange | language: markdown | contentVersion: 1978 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:04.971Z --> received didChange | language: markdown | contentVersion: 1979 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:05.143Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:08:05.144Z --> completion hints: | use cases, I don't need to run memory-intensive workloads in the VMs, so I am not too concerned about this. + +APP 2025-04-04T20:08:05.144Z --> sent request | {"jsonrpc":"2.0","id":404,"result":{"isIncomplete":false,"items":[{"label":"use cases, I don't need to run memory-intensive workloads in the VMs, so I am not too concerned about this.","kind":1,"preselect":true,"detail":"use cases, I don't need to run memory-intensive workloads in the VMs, so I am not too concerned about this.","insertText":"use cases, I don't need to run memory-intensive workloads in the VMs, so I am not too concerned about this.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":448},"end":{"line":452,"character":448}}}]}]}} + +APP 2025-04-04T20:08:05.144Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:05.176Z --> received didChange | language: markdown | contentVersion: 1980 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:05.403Z --> received didChange | language: markdown | contentVersion: 1981 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:05.409Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":346,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":405} + +APP 2025-04-04T20:08:05.610Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post in the f3s series, covering self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nBhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It leverages the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
Bhyve support for the POPCNT instruction is important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to adjust this configuration to make Rocky Linux boot. We also give the VMs beefy specs (4 CPU cores and 14GB RAM), as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped each VM again, ran `truncate` on its image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples shown are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME    WCPU COMMAND
7449 root       14  20    0   14G  78M kqread  2  2:12 399.81% bhyve
```

So overall, there is a small overhead of Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a bit more, but I am not sure whether to trust this benchmark or not. For my use cases, I don't need to run memory-intensive applications in the VMs, so I am not too concerned about this.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:07.567Z --> received didChange | language: markdown | contentVersion: 1991 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:07.573Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":352,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":407} + +APP 2025-04-04T20:08:07.775Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is some overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it works well enough.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":1991} + +APP 2025-04-04T20:08:07.775Z --> calling completion event + +APP 2025-04-04T20:08:07.776Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:08:07.776Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:08:07.776Z --> copilot | completion request + +APP 2025-04-04T20:08:07.776Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:08:07.787Z --> received didChange | language: markdown | contentVersion: 1992 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:07.901Z --> received didChange | language: markdown | contentVersion: 1993 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:07.988Z --> received didChange | language: markdown | contentVersion: 1994 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:07.993Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":355,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":408} + +APP 2025-04-04T20:08:08.140Z --> received didChange | language: markdown | contentVersion: 1995 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:08.195Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
For the first run, we limit it to one CPU with `-s 1`, and for the second run, we let it run at full speed.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate this further here.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
 PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME    WCPU COMMAND
7449 root       14  20    0   14G  78M kqread  2  2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will do; I don't need to run memory-hungry applications in the VMs, so I am not too concerned about this.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all three VMs. The examples below are all executed for the VM `r0` running on host `f0`:

### VM auto-start after host reboot

To automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, the `AUTO` column shows `Yes`:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
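The VM names and addresses follow a simple pattern (`r0`, `r1`, `r2` starting at `192.168.1.120`, as used throughout this post), so the hosts entries can also be generated with a small loop instead of being typed by hand (a sketch; pipe the output into `doas tee -a /etc/hosts` to apply it):

```shell
# Print the /etc/hosts entries for the three Rocky VMs, r0..r2,
# starting at 192.168.1.120.
base=120
for i in 0 1 2; do
    printf '192.168.1.%d r%d r%d.lan r%d.lan.buetow.org\n' \
        "$((base + i))" "$i" "$i" "$i"
done
# Output:
# 192.168.1.120 r0 r0.lan r0.lan.buetow.org
# 192.168.1.121 r1 r1.lan r1.lan.buetow.org
# 192.168.1.122 r2 r2.lan r2.lan.buetow.org
```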
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add corresponding entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs (shown here for `r0`):

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public SSH key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs
and set `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4022 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4027 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4   1000000000   0.4347 ns/op
BenchmarkCPUSilly2-4   1000000000   0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1   1000000000   0.4273 ns/op
BenchmarkCPUSilly2   1000000000   0.4286 ns/op
PASS
ok   codeberg.org/snonux/sillybench   0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, and I am not investigating further here.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

So overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be sufficient.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:09.534Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":366,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":411} + +APP 2025-04-04T20:08:09.714Z --> received didChange | language: markdown | contentVersion: 2006 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:09.736Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still b\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2006} + +APP 2025-04-04T20:08:09.736Z --> skipping because content is stale + +APP 2025-04-04T20:08:09.736Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:09.736Z --> sent request | {"jsonrpc":"2.0","id":411,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:08:09.776Z --> received didChange | language: markdown | contentVersion: 2007 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:09.888Z --> received didChange | language: markdown | contentVersion: 2008 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:09.894Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":369,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":412} + +APP 2025-04-04T20:08:10.054Z --> received didChange | language: markdown | contentVersion: 2009 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:10.095Z --> running completion on 
buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to modify this configuration to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to static (instead of DHCP). 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I won't investigate further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is some overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fine.\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2009} + +APP 2025-04-04T20:08:10.095Z --> skipping because content is stale + +APP 2025-04-04T20:08:10.095Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:10.095Z --> sent request | {"jsonrpc":"2.0","id":412,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:08:10.384Z --> received didChange | language: markdown | contentVersion: 2010 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:10.452Z --> received didChange | language: markdown | contentVersion: 2011 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:10.643Z --> received didChange | language: markdown | contentVersion: 2012 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:10.706Z --> received didChange | language: markdown | contentVersion: 2013 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:10.711Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":374,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":413} + +APP 2025-04-04T20:08:10.864Z --> received didChange | language: markdown | contentVersion: 2014 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:10.913Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we have to modify this configuration to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to static (instead of DHCP). 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
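As a quick sanity check on the silly benchmark figures, the relative slowdown can be computed directly from the ns/op values. A minimal Go sketch (the constants are the numbers measured above; nothing else is assumed):

```go
package main

import "fmt"

// overheadPercent returns how much slower vm is than host, in percent.
func overheadPercent(hostNsPerOp, vmNsPerOp float64) float64 {
	return (vmNsPerOp - hostNsPerOp) / hostNsPerOp * 100
}

func main() {
	host := 0.4022  // BenchmarkCPUSilly1 on the FreeBSD host, ns/op
	rocky := 0.4347 // same benchmark in the Rocky Linux Bhyve VM
	fbsd := 0.4273  // same benchmark in the FreeBSD Bhyve VM

	fmt.Printf("Rocky Linux VM vs. host: %.1f%% slower\n", overheadPercent(host, rocky))
	fmt.Printf("FreeBSD VM vs. host:     %.1f%% slower\n", overheadPercent(host, fbsd))
}
```

So the virtualization penalty in this crude test stays in the single-digit percent range, which matches the impression that Bhyve performs excellently.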
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems a bit more off, but I am not sure whether to trust it or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2014} + +APP 2025-04-04T20:08:10.913Z --> skipping because content is stale + +APP 2025-04-04T20:08:10.913Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:10.913Z --> sent request | {"jsonrpc":"2.0","id":413,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:08:11.356Z --> received didChange | language: markdown | contentVersion: 2015 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:11.414Z --> received didChange | language: markdown | contentVersion: 2016 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:11.589Z --> received didChange | language: markdown | contentVersion: 2017 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:11.778Z --> received didChange | language: markdown | contentVersion: 2018 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:11.879Z --> received didChange | language: markdown | contentVersion: 2019 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:12.031Z --> received didChange | language: markdown | contentVersion: 2020 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:12.124Z --> received didChange | language: markdown | contentVersion: 2021 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:12.364Z --> received didChange | language: markdown | contentVersion: 2022 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:08:12.375Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":381,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":414} + +APP 2025-04-04T20:08:12.576Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems a bit more off, but I am not sure whether to trust it or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2022} + +APP 2025-04-04T20:08:12.576Z --> calling completion event + +APP 2025-04-04T20:08:12.576Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:08:12.576Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:08:12.576Z --> copilot | completion request + +APP 2025-04-04T20:08:12.577Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:08:12.823Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:08:12.824Z --> completion hints: | + +APP 2025-04-04T20:08:12.824Z --> sent request | {"jsonrpc":"2.0","id":414,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":381},"end":{"line":452,"character":381}}}]}]}} + +APP 2025-04-04T20:08:12.824Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:13.621Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:14.697Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:15.610Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:17.777Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:08:22.577Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:29.058Z --> received didChange | language: markdown | contentVersion: 2023 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:29.063Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":209,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":415} + +APP 2025-04-04T20:09:29.146Z --> received didChange | language: markdown | contentVersion: 2024 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:29.240Z --> received didChange | language: markdown | contentVersion: 2025 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:29.265Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some adjustments to make Rocky Linux boot (and as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off by a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2025} + +APP 2025-04-04T20:09:29.265Z --> skipping because content is stale + +APP 2025-04-04T20:09:29.265Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:29.265Z --> sent request | {"jsonrpc":"2.0","id":415,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:09:29.506Z --> received didChange | language: markdown | contentVersion: 2026 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:29.512Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":212,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":416} + +APP 2025-04-04T20:09:29.713Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs some adjustments to make Rocky Linux boot (and as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off by a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2026} + +APP 2025-04-04T20:09:29.713Z --> calling completion event + +APP 2025-04-04T20:09:29.714Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:09:29.714Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:09:29.714Z --> copilot | completion request + +APP 2025-04-04T20:09:29.714Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:09:30.241Z --> received didChange | language: markdown | contentVersion: 2027 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:30.303Z --> received didChange | language: markdown | contentVersion: 2028 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:30.398Z --> received didChange | language: markdown | contentVersion: 2029 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:30.505Z --> received didChange | 
language: markdown | contentVersion: 2030 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:30.511Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":216,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":417} + +APP 2025-04-04T20:09:30.559Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:09:30.562Z --> completion hints: | the VM has 14GB of RAM and the host has 16GB of RAM. I suspect that the Bhyve VM is not able to use all the memory available on the host system. I will investigate this further in the next blog post, but for now, I will ignore it as it doesn't affect my use case. The CPU benchmark results are good enough for me. + +APP 2025-04-04T20:09:30.562Z --> sent request | {"jsonrpc":"2.0","id":416,"result":{"isIncomplete":false,"items":[{"label":"the VM has 14GB of RAM and the host has 16GB of RAM. I suspect that the Bhyve VM is not able to use all the memory available on the host system. I will investigate this further in the next blog post, but for now, I will ignore it as it doesn't affect my use case. The CPU benchmark results are good enough for me.","kind":1,"preselect":true,"detail":"the VM has 14GB of RAM and the host has 16GB of RAM. I suspect that the Bhyve VM is not able to use all the memory available on the host system. I will investigate this further in the next blog post, but for now, I will ignore it as it doesn't affect my use case. The CPU benchmark results are good enough for me.","insertText":"the VM has 14GB of RAM and the host has 16GB of RAM. I suspect that the Bhyve VM is not able to use all the memory available on the host system. 
I will investigate this further in the next blog post, but for now, I will ignore it as it doesn't affect my use case. The CPU benchmark results are good enough for me.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":525},"end":{"line":443,"character":567}}}]}]}} + +APP 2025-04-04T20:09:30.562Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:30.711Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the examples below, everything is executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, but I am not investigating this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off by a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2030} + +APP 2025-04-04T20:09:30.712Z --> calling completion event + +APP 2025-04-04T20:09:30.712Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:09:30.712Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:09:30.712Z --> copilot | completion request + +APP 2025-04-04T20:09:30.713Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:09:30.846Z --> received didChange | language: markdown | contentVersion: 2031 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:31.395Z --> received didChange | language: markdown | contentVersion: 2032 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:31.449Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:09:31.451Z --> completion hints: | VM has 14GB of RAM and the host has 16GB of RAM. 
I also tried to increase the VM memory to 16GB, but it didn't help. I also tried to increase the swap space on the host, but that didn't help either. I am not sure whether this is a bug in `ubench` or a limitation of Bhyve. I will investigate this further in the future. + +APP 2025-04-04T20:09:31.451Z --> sent request | {"jsonrpc":"2.0","id":417,"result":{"isIncomplete":false,"items":[{"label":"VM has 14GB of RAM and the host has 16GB of RAM. I also tried to increase the VM memory to 16GB, but it didn't help. I also tried to increase the swap space on the host, but that didn't help either. I am not sure whether this is a bug in `ubench` or a limitation of Bhyve. I will investigate this further in the future.","kind":1,"preselect":true,"detail":"VM has 14GB of RAM and the host has 16GB of RAM. I also tried to increase the VM memory to 16GB, but it didn't help. I also tried to increase the swap space on the host, but that didn't help either. I am not sure whether this is a bug in `ubench` or a limitation of Bhyve. I will investigate this further in the future.","insertText":"VM has 14GB of RAM and the host has 16GB of RAM. I also tried to increase the VM memory to 16GB, but it didn't help. I also tried to increase the swap space on the host, but that didn't help either. I am not sure whether this is a bug in `ubench` or a limitation of Bhyve. 
I will investigate this further in the future.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":535},"end":{"line":443,"character":577}}}]}]}} + +APP 2025-04-04T20:09:31.451Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:31.480Z --> received didChange | language: markdown | contentVersion: 2033 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:31.485Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":219,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":418} + +APP 2025-04-04T20:09:31.683Z --> received didChange | language: markdown | contentVersion: 2034 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:31.685Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs on the servers, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each of the VMs and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am not exactly sure why, as the VM has 12GB of RAM, but I am not investigating this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems a bit more off, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2034} + +APP 2025-04-04T20:09:31.685Z --> skipping because content is stale + +APP 2025-04-04T20:09:31.685Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:31.685Z --> sent request | {"jsonrpc":"2.0","id":418,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:09:31.735Z --> received didChange | language: markdown | contentVersion: 2035 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:31.814Z --> received didChange | language: markdown | contentVersion: 2036 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:31.908Z --> received didChange | language: markdown | contentVersion: 2037 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:31.913Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":223,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":419} + +APP 2025-04-04T20:09:32.048Z --> received didChange | 
language: markdown | contentVersion: 2038 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:32.114Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs some changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs on the servers, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each of the VMs and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s' and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM run through with almost identical results like on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why as the VM has g, but I am not further investigating here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs)\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a all overhead of Bhyve, but the CPU performance difference it is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2038} + +APP 2025-04-04T20:09:32.115Z --> skipping because content is stale + +APP 2025-04-04T20:09:32.115Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:32.115Z --> sent request | {"jsonrpc":"2.0","id":419,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:09:32.149Z --> received didChange | language: markdown | contentVersion: 2039 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:32.256Z --> received didChange | language: markdown | contentVersion: 2040 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:32.359Z --> received didChange | language: markdown | contentVersion: 2041 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:09:32.365Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":227,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":420} + +APP 2025-04-04T20:09:32.565Z --> running completion on 
buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, as the VM has plenty of RAM assigned, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2041} + +APP 2025-04-04T20:09:32.566Z --> calling completion event + +APP 2025-04-04T20:09:32.566Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:09:32.566Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:09:32.566Z --> copilot | completion request + +APP 2025-04-04T20:09:32.566Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:09:33.311Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:09:33.312Z --> completion hints: | 14GB of RAM, and the host has 16GB of RAM. I also tried to increase the swap space on the VM to 8GB, but it didn't help. I think this is a bug in `ubench` or a problem with the Bhyve VM configuration. I will investigate this further in the future. + +APP 2025-04-04T20:09:33.312Z --> sent request | {"jsonrpc":"2.0","id":420,"result":{"isIncomplete":false,"items":[{"label":"14GB of RAM, and the host has 16GB of RAM. I also tried to increase the swap space on the VM to 8GB, but it didn't help. 
I think this is a bug in `ubench` or a problem with the Bhyve VM configuration. I will investigate this further in the future.","kind":1,"preselect":true,"detail":"14GB of RAM, and the host has 16GB of RAM. I also tried to increase the swap space on the VM to 8GB, but it didn't help. I think this is a bug in `ubench` or a problem with the Bhyve VM configuration. I will investigate this further in the future.","insertText":"14GB of RAM, and the host has 16GB of RAM. I also tried to increase the swap space on the VM to 8GB, but it didn't help. I think this is a bug in `ubench` or a problem with the Bhyve VM configuration. I will investigate this further in the future.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":474},"end":{"line":443,"character":516}}}]}]}} + +APP 2025-04-04T20:09:33.312Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:39.714Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:40.713Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:09:42.567Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:11:09.874Z --> received didChange | language: markdown | contentVersion: 2042 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:10.134Z --> received didChange | language: markdown | contentVersion: 2043 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:10.385Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":229,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":421} + +APP 2025-04-04T20:11:10.514Z --> failed to parse line: | failed to parse | Content-Length: 19001
+
+ + +APP 2025-04-04T20:11:10.515Z --> received didChange | language: markdown | contentVersion: 2044 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:10.587Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its 
long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs a few adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each of the VMs and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, as the VM has got 12GB of RAM, but I am not investigating further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is some overhead to Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2044} + +APP 2025-04-04T20:11:10.587Z --> skipping because content is stale + +APP 2025-04-04T20:11:10.588Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:11:10.588Z --> sent request | {"jsonrpc":"2.0","id":421,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:11:11.014Z --> received didChange | language: markdown | contentVersion: 2045 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:11.265Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":231,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":422} + +APP 2025-04-04T20:11:11.467Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs a few adjustments to make Rocky Linux boot (and, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC on each of the VMs and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s 1`, and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, as the VM has got 12GB of RAM, but I am not investigating this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2045} + +APP 2025-04-04T20:11:11.467Z --> calling completion event + +APP 2025-04-04T20:11:11.468Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}}}] + +APP 2025-04-04T20:11:11.468Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":443,"character":0},"end":{"line":444,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:11:11.468Z --> copilot | completion request + +APP 2025-04-04T20:11:11.468Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:11:12.436Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:11:12.438Z --> completion hints: | ust 14GB of RAM, and the host has 32GB of RAM. I suspect that the Bhyve VM is not able to use all the memory available on the host system. I will have to investigate this further. The FreeBSD VM running on Bhyve was also limited to 12GB of RAM, so it should have been enough. + +APP 2025-04-04T20:11:12.438Z --> sent request | {"jsonrpc":"2.0","id":422,"result":{"isIncomplete":false,"items":[{"label":"ust 14GB of RAM, and the host has 32GB of RAM. 
I suspect that the Bhyve VM is not able to use all the memory available on the host system. I will have to investigate this further. The FreeBSD VM running on Bhyve was also limited to 12GB of RAM, so it should have been enough.","kind":1,"preselect":true,"detail":"ust 14GB of RAM, and the host has 32GB of RAM. I suspect that the Bhyve VM is not able to use all the memory available on the host system. I will have to investigate this further. The FreeBSD VM running on Bhyve was also limited to 12GB of RAM, so it should have been enough.","insertText":"ust 14GB of RAM, and the host has 32GB of RAM. I suspect that the Bhyve VM is not able to use all the memory available on the host system. I will have to investigate this further. The FreeBSD VM running on Bhyve was also limited to 12GB of RAM, so it should have been enough.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":443,"character":506},"end":{"line":443,"character":548}}}]}]}} + +APP 2025-04-04T20:11:12.438Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:11:21.469Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:11:23.641Z --> received didChange | language: markdown | contentVersion: 2046 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:23.809Z --> received didChange | language: markdown | contentVersion: 2047 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:23.980Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":164,"line":350},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":423} + +APP 2025-04-04T20:11:24.182Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 12GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s` and we let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 freebsd 
kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran through with almost identical results to those on the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, as the VM has got 12GB of RAM, but I am not investigating this further here.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nSo overall, there is a small overhead of Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems off a little bit more, but I am not sure whether to trust this benchmark or not. For my use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows VM or a NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2047} + +APP 2025-04-04T20:11:24.182Z --> calling completion event + +APP 2025-04-04T20:11:24.183Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":350,"character":0},"end":{"line":351,"character":0}}}] + +APP 2025-04-04T20:11:24.183Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":350,"character":0},"end":{"line":351,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:11:24.183Z --> copilot | completion request + +APP 2025-04-04T20:11:24.183Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:11:24.403Z --> received didChange | language: markdown | contentVersion: 2048 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:24.621Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:11:24.622Z --> completion hints: | 14GB RAM): + +APP 2025-04-04T20:11:24.622Z --> sent request | {"jsonrpc":"2.0","id":423,"result":{"isIncomplete":false,"items":[{"label":"14GB RAM):","kind":1,"preselect":true,"detail":"14GB RAM):","insertText":"14GB 
RAM):","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":350,"character":174},"end":{"line":350,"character":230}}}]}]}} + +APP 2025-04-04T20:11:24.622Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:11:34.185Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:11:36.875Z --> received didChange | language: markdown | contentVersion: 2049 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:48.832Z --> received didChange | language: markdown | contentVersion: 2050 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:11:53.191Z --> received didChange | language: markdown | contentVersion: 2051 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:12:01.596Z --> received didChange | language: markdown | contentVersion: 2052 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:12:04.672Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":261,"line":443},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":424} + +APP 2025-04-04T20:12:04.874Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need a few adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or even less often.

### Increasing the disk image size

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
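If I ever do want to script it, the shared parts of the VM configuration could be stamped out by a small helper instead of being typed into `doas vm configure` by hand. This is only a hypothetical sketch based on the configuration shown above; `uuid` and `network0_mac` are deliberately left out, as `vm-bhyve` fills those in per VM:

```shell
#!/bin/sh
# Hypothetical helper: write the settings shared by all three Rocky
# guests to a template file. vm-bhyve generates uuid and network0_mac
# itself when a guest is created, so they are not templated here.
tmpl=${1:-rocky.conf.tmpl}

cat > "$tmpl" <<'EOF'
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
EOF

echo "wrote $tmpl"
```

Running this once per host (e.g., dropping the result into `vm-bhyve`'s template directory under the datastore) would keep the three hosts consistent without manual editing.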
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. In the following, the examples are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran through with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out of swap space errors. I am not exactly sure why, as the VM has 8GB of RAM, but I am not investigating further. Also, during the benchmark, I noticed the `bhyve` process on the host was constantly at `399%` CPU usage (using all 4 CPUs).

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

So overall, there is a small overhead with Bhyve, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems a bit more off, but I am not sure whether to trust it or not. For my use cases, it will still be fast enough!

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows VM or a NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different kinds of workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For my use cases (), it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2062}
+
+APP 2025-04-04T20:16:32.846Z --> calling completion event
+
+APP 2025-04-04T20:16:32.846Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}]
+
+APP 2025-04-04T20:16:32.846Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}}
+
+APP 2025-04-04T20:16:32.846Z --> copilot | completion request
+
+APP 2025-04-04T20:16:32.847Z --> fetch | /v1/engines/copilot-codex/completions
+
+APP 2025-04-04T20:16:32.987Z --> received didChange | language: markdown | contentVersion: 2063 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.166Z --> received didChange | language: markdown | contentVersion: 2064 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.312Z --> received didChange | language: markdown | contentVersion: 2065 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.463Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200
+
+APP 2025-04-04T20:16:33.463Z --> completion hints: | e.g., running k3s), the performance is good enough.
+
+APP 2025-04-04T20:16:33.464Z --> sent request | {"jsonrpc":"2.0","id":429,"result":{"isIncomplete":false,"items":[{"label":"e.g., running k3s), the performance is good enough.","kind":1,"preselect":true,"detail":"e.g., running k3s), the performance is good enough.","insertText":"e.g., running k3s), the performance is good enough.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":362},"end":{"line":452,"character":394}}}]}]}}
+
+APP 2025-04-04T20:16:33.464Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}}
+
+APP 2025-04-04T20:16:33.504Z --> received didChange | language: markdown | contentVersion: 2066 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.672Z --> received didChange | language: markdown | contentVersion: 2067 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.723Z --> received didChange | language: markdown | contentVersion: 2068 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.789Z --> received didChange | language: markdown | contentVersion: 2069 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.878Z --> received didChange | language: markdown | contentVersion: 2070 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:33.883Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":319,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":430}
+
+APP 2025-04-04T20:16:33.969Z --> received didChange | language: markdown | contentVersion: 2071 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
+
+APP 2025-04-04T20:16:34.085Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to set up the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its main strength is minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the capabilities of the FreeBSD operating system for resource and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will be used later in this series to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction matters because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance.
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with Red Hat Enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, some adjustments are required. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900    *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and went through the installation dialogues. This could be done unattended or at least more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually went through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
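For the record, had the automation been worth it, Anaconda-based installers like Rocky's can run fully unattended from a kickstart file. A minimal sketch with hypothetical values (untested in this setup):

```
# ks.cfg - minimal unattended install (sketch, hypothetical values)
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
```

The file would then be handed to the installer via the `inst.ks=` boot option.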
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To start the VMs automatically on host boot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there is now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use more than that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM produced almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly using 399% CPU (all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For my use cases, it will still be fast enough!

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
| contentVersion: 2079 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:34.813Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the package required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few changes. We also apply some other adjustments: as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the file to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated further, but there were just three VMs, and it wasn't worth the effort.
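As a side note, `truncate` grows the image sparsely: the file's logical size changes immediately, but blocks are only allocated once the guest actually writes to them. A small demo of the mechanism on a throwaway file (the 100M size is arbitrary):

```shell
# Demo: truncate(1) changes the logical size without writing data.
demo=$(mktemp)
truncate -s 100M "$demo"   # same mechanism as enlarging disk0.img
wc -c < "$demo"            # logical size: 104857600 bytes
rm "$demo"
```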
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0`, which runs on `f0`):

### VM auto-start after host reboot

To automatically start the VM on host boot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to allow only SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
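Before moving on, a quick look back at the silly Go numbers: the relative slowdown of the Rocky Linux VM versus the FreeBSD host is easy to quantify. A sketch using the `BenchmarkCPUSilly1` ns/op values measured above:

```shell
# Relative overhead of the Rocky Linux VM vs. the FreeBSD host,
# based on the BenchmarkCPUSilly1 ns/op results shown earlier.
host_ns=0.4022   # FreeBSD host
vm_ns=0.4347     # Rocky Linux VM on Bhyve
awk -v h="$host_ns" -v v="$vm_ns" \
    'BEGIN { printf "%.1f%% slower\n", (v - h) / h * 100 }'
# prints: 8.1% slower
```

So even in the worst of the two micro-benchmarks, the VM stays within roughly a tenth of native speed.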
For the first run, we limit `ubench` to a single CPU with `-s 1`; the second run goes at full speed.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES  STATE   C  TIME  WCPU     COMMAND
 7449 root       14  20    0   14G  78M  kqread  2  2:12  399.81%  bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For my use cases (looking at the single-CPU results), it will still be fast enough!

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2084} + +APP 2025-04-04T20:16:35.479Z --> calling completion event + +APP 2025-04-04T20:16:35.479Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:16:35.479Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:16:35.480Z --> copilot | completion request + +APP 2025-04-04T20:16:35.480Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:16:35.804Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:16:35.805Z --> completion hints: | CPU benchmark), Bhyve is a good choice. 
+ +APP 2025-04-04T20:16:35.805Z --> sent request | {"jsonrpc":"2.0","id":433,"result":{"isIncomplete":false,"items":[{"label":"CPU benchmark), Bhyve is a good choice.","kind":1,"preselect":true,"detail":"CPU benchmark), Bhyve is a good choice.","insertText":"CPU benchmark), Bhyve is a good choice.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":372},"end":{"line":452,"character":404}}}]}]}} + +APP 2025-04-04T20:16:35.805Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:16:36.135Z --> received didChange | language: markdown | contentVersion: 2085 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:36.252Z --> received didChange | language: markdown | contentVersion: 2086 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:36.481Z --> received didChange | language: markdown | contentVersion: 2087 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:36.570Z --> received didChange | language: markdown | contentVersion: 2088 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:36.576Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":337,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":434} + +APP 2025-04-04T20:16:36.778Z --> running completion on buffer | 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post in the f3s series about self-hosting in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

FreeBSD's Bhyve is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its strengths include minimal overhead, which allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. POPCNT support in the host CPU matters for Bhyve because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance.
Without POPCNT support, some applications might not run at all, or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
 Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the `bhyve-firmware` package, which is required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable: -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir: -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve 1.74M 453G 1.74M /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle.
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this now:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

But the configuration needs a few adjustments to make Rocky Linux boot (a Linux guest with a UEFI loader). Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
 * found guest in /zroot/bhyve/rocky
 * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root bhyve 6079 8 tcp4 *:5900 *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
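If scripting the installs ever does become worth it, a minimal, untested sketch could drive `vm install` over SSH from the laptop. The helper name `install_all` is hypothetical, and it assumes SSH access to the three hosts and that the ISO has already been fetched with `vm iso` on each of them:

```sh
# Hypothetical helper: kick off the Rocky installer on all three hosts.
# Assumes the ISO was already downloaded on each host via "vm iso".
install_all() {
    for host in f0 f1 f2; do
        ssh "$host" doas vm install rocky Rocky-9.5-x86_64-minimal.iso
    done
}
```

Each VNC console would then still need to be visited once for the installation dialogues.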
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE
rocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static addresses.
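Before pinning static values, it can help to double-check what DHCP handed out inside a VM. A small sketch (the helper name `show_ipv4` is hypothetical; it assumes the connection is named `enp0s5`, as on my VMs, and uses `nmcli -g` to print only the selected properties):

```sh
# Hypothetical helper: print the current IPv4 settings of a connection,
# one property per line, so the static configuration can mirror them.
show_ipv4() {
    conn="$1"
    nmcli -g ipv4.method,ipv4.addresses,ipv4.gateway,ipv4.dns connection show "$conn"
}
```

Running `show_ipv4 enp0s5` prints the method, addresses, gateway, and DNS servers of that connection.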
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` so that only SSH key authentication is allowed from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (it lives in a `*_test.go` file so that `go test -bench=.` picks it up):

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use more CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB of RAM, but I did not investigate further.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% CPU (all 4 CPUs):

```
  PID USERNAME  THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root       14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible: the FreeBSD host is only slightly faster than the FreeBSD VM running on Bhyve. The memory benchmark seems slightly off, but I don't know whether to trust it. For my use cases (judging by the single CPU benchmark), it will still be fast enough!

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T20:16:37.047Z --> sent request | {"jsonrpc":"2.0","id":434,"result":{"isIncomplete":false,"items":[{"label":"benchmark), Bhyve is a good choice.","kind":1,"preselect":true,"detail":"benchmark), Bhyve is a good choice.","insertText":"benchmark), Bhyve is a good choice.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":372},"end":{"line":452,"character":404}}}]}]}} + +APP 2025-04-04T20:16:37.047Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:16:37.143Z --> received didChange | language: markdown | contentVersion: 2089 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:37.232Z --> received didChange | language: markdown | contentVersion: 2090 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:37.328Z --> received didChange | language: markdown | contentVersion: 2091 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:37.483Z --> received didChange | language: markdown | contentVersion: 2092 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:37.658Z --> received didChange | language: markdown | contentVersion: 2093 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:37.820Z --> received didChange | language: markdown | contentVersion: 2094 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 
2025-04-04T20:16:37.889Z --> received didChange | language: markdown | contentVersion: 2095 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:37.894Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":344,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":435} + +APP 2025-04-04T20:16:37.966Z --> received didChange | language: markdown | contentVersion: 2096 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:38.040Z --> received didChange | language: markdown | contentVersion: 2097 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:38.095Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
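One note on the `truncate` step from the section above: it only changes the file's apparent size, so the enlarged image is sparse and does not consume 100G on the ZFS dataset up front. A quick sketch to illustrate the effect (using a scratch file as a stand-in for the real `disk0.img`):

```sh
# truncate grows the apparent size without allocating blocks
truncate -s 100M scratch.img
ls -l scratch.img   # apparent size: 104857600 bytes
du -k scratch.img   # allocated size: (close to) 0 blocks
```

Blocks are only allocated as the guest actually writes to the disk.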
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
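As an aside, the same `rc.conf` entries could also be set with `sysrc`, which this setup already uses for `vm_enable` and `vm_dir` (a sketch; the appended-heredoc approach above works just as well):

```sh
# Equivalent to appending the two lines by hand;
# sysrc edits /etc/rc.conf idempotently
doas sysrc vm_list="rocky"
doas sysrc vm_delay="5"
```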
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For my use cases (looking at the single-CPU memory test), it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2097} + +APP 2025-04-04T20:16:38.095Z --> skipping because content is stale + +APP 2025-04-04T20:16:38.095Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:16:38.095Z --> sent request | {"jsonrpc":"2.0","id":435,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:16:38.259Z --> received didChange | language: markdown | contentVersion: 2098 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:38.463Z --> received didChange | language: markdown | contentVersion: 2099 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:38.666Z --> received didChange | language: markdown | contentVersion: 2100 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:42.847Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:16:45.479Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:16:46.778Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:16:49.166Z --> received didChange | language: markdown | contentVersion: 2101 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:55.122Z --> received didChange | language: markdown | contentVersion: 2102 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:55.892Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":297,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":436} + +APP 2025-04-04T20:16:56.094Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this configuration needs changes to make Rocky Linux boot (plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples shown are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2102} + +APP 2025-04-04T20:16:56.095Z --> calling completion event + +APP 2025-04-04T20:16:56.095Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:16:56.095Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:16:56.095Z --> copilot | completion request + +APP 2025-04-04T20:16:56.095Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:16:56.286Z --> received didChange | language: markdown | contentVersion: 2103 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:56.345Z --> received didChange | language: markdown | contentVersion: 2104 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:56.399Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:16:56.400Z --> completion hints: | the CPU benchmark, I am confident that the results are valid. 
+ +APP 2025-04-04T20:16:56.400Z --> sent request | {"jsonrpc":"2.0","id":436,"result":{"isIncomplete":false,"items":[{"label":"the CPU benchmark, I am confident that the results are valid.","kind":1,"preselect":true,"detail":"the CPU benchmark, I am confident that the results are valid.","insertText":"the CPU benchmark, I am confident that the results are valid.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":358},"end":{"line":452,"character":398}}}]}]}} + +APP 2025-04-04T20:16:56.400Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:16:56.448Z --> received didChange | language: markdown | contentVersion: 2105 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:56.606Z --> received didChange | language: markdown | contentVersion: 2106 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:16:56.611Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":301,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":437} + +APP 2025-04-04T20:16:56.813Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the `bhyve-firmware` package, which is required for booting VMs via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS for the VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with Red Hat Enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This configuration needs some adjustments to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB RAM. So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increase of the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
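If light automation were ever wanted, the stop/resize/reinstall steps could be generated for all three hosts with a small loop. This is only a sketch under the assumptions of this post (hosts `f0`-`f2`, SSH access, the `/bhyve/rocky` path); it deliberately prints the command lines instead of executing anything:

```sh
# Print (not run) the disk-enlargement and reinstall commands for each host.
# Review the output first, then pipe it to sh to actually execute.
for host in f0 f1 f2; do
  printf 'ssh %s doas vm stop rocky\n' "$host"
  printf 'ssh %s doas truncate -s 100G /bhyve/rocky/disk0.img\n' "$host"
  printf 'ssh %s doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n' "$host"
done
```

Printing first makes it easy to eyeball the commands before touching three hypervisors at once.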
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses. As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add these entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
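As a quick sanity check on the silly benchmark numbers above, the gap between the FreeBSD host (0.4022 ns/op) and the Rocky Linux VM (0.4347 ns/op) for `BenchmarkCPUSilly1` can be expressed as a relative overhead with a one-liner (the ns/op values are hard-coded from the runs above):

```sh
# Relative VM overhead for BenchmarkCPUSilly1, using the ns/op values above.
awk 'BEGIN { host = 0.4022; vm = 0.4347
             printf "VM overhead: %.1f%%\n", (vm - host) / host * 100 }'
# Prints: VM overhead: 8.1%
```

Roughly 8% on this micro-benchmark, which matches the "slightly slower" impression; again, a grain of salt applies.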
For `ubench`, we limit it to one CPU in the first run with `-s 1`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM produced almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for my use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For our use cases, it will still be fast enough!

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
+ +APP 2025-04-04T20:16:57.116Z --> sent request | {"jsonrpc":"2.0","id":437,"result":{"isIncomplete":false,"items":[{"label":"use cases, the CPU performance is more important than the memory performance.","kind":1,"preselect":true,"detail":"use cases, the CPU performance is more important than the memory performance.","insertText":"use cases, the CPU performance is more important than the memory performance.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":378},"end":{"line":452,"character":418}}}]}]}} + +APP 2025-04-04T20:16:57.116Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:01.085Z --> received didChange | language: markdown | contentVersion: 2107 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:01.214Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":200,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":438} + +APP 2025-04-04T20:17:01.416Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the `bhyve-firmware` package, which is required to boot Bhyve guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `/bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, this configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14 GB of RAM.
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8  tcp4   *:5900                *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` of the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly using 399% CPU (all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For our use cases, it will still be fast enough!

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
completion hints: | purposes. + +APP 2025-04-04T20:17:02.544Z --> sent request | {"jsonrpc":"2.0","id":439,"result":{"isIncomplete":false,"items":[{"label":"purposes.","kind":1,"preselect":true,"detail":"purposes.","insertText":"purposes.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":213},"end":{"line":452,"character":351}}}]}]}} + +APP 2025-04-04T20:17:02.544Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:06.096Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:06.814Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:08.059Z --> received didChange | language: markdown | contentVersion: 2112 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:08.151Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":294,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":440} + +APP 2025-04-04T20:17:08.353Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the 
fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
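As a small aside, this instruction is also visible from userland. Go's `math/bits` package (a tiny standalone illustration of my own, not part of this series' setup) compiles `OnesCount64` down to a single `POPCNT` on amd64 CPUs that support it:

```go
package main

import (
	"fmt"
	"math/bits"
)

// popcount returns the number of set bits in x. On amd64 CPUs with
// POPCNT support, bits.OnesCount64 is compiled to that single
// instruction instead of a bit-twiddling loop.
func popcount(x uint64) int {
	return bits.OnesCount64(x)
}

func main() {
	for _, v := range []uint64{0b1011, 255, 1 << 20} {
		fmt.Printf("popcount(%d) = %d\n", v, popcount(v))
	}
}
```

On CPUs without POPCNT, Go transparently falls back to a software implementation, which is exactly why exposing the host capability to the guest matters for performance.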
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration doesn't boot Rocky Linux, though, so it needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
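As an aside, this configuration format is just flat `key="value"` pairs, so it is trivial to process programmatically. Here is a small hypothetical Go sketch of my own (purely illustrative; `vm-bhyve` itself parses these files for you):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVMConf parses the flat key="value" format used by vm-bhyve
// configuration files into a map. Quotes around values are optional,
// mirroring lines like cpu=1 and loader="bhyveload" above.
func parseVMConf(conf string) map[string]string {
	settings := make(map[string]string)
	for _, line := range strings.Split(conf, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, found := strings.Cut(line, "=")
		if !found {
			continue
		}
		settings[key] = strings.Trim(value, "\"")
	}
	return settings
}

func main() {
	conf := "loader=\"bhyveload\"\ncpu=1\nmemory=256M"
	m := parseVMConf(conf)
	fmt.Println(m["loader"], m["cpu"], m["memory"])
}
```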
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (and the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
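Before moving on to `ubench`: to put the silly-benchmark ns/op figures above into perspective, the relative overhead can be computed directly from the reported numbers (hard-coded below from the runs above; my own throwaway calculation, not part of sillybench):

```go
package main

import "fmt"

// overheadPercent returns how much slower vm is relative to host,
// in percent, given two ns/op figures from the benchmark runs.
func overheadPercent(host, vm float64) float64 {
	return (vm - host) / host * 100
}

func main() {
	host := 0.4022      // FreeBSD host, BenchmarkCPUSilly1
	rockyVM := 0.4347   // Rocky Linux Bhyve VM
	freebsdVM := 0.4273 // FreeBSD Bhyve VM

	fmt.Printf("Rocky VM overhead:   %.1f%%\n", overheadPercent(host, rockyVM))
	fmt.Printf("FreeBSD VM overhead: %.1f%%\n", overheadPercent(host, freebsdVM))
}
```

That works out to roughly 8% for the Rocky Linux VM and 6% for the FreeBSD VM relative to the host, so the overhead stays in the single digits. Now, on to `ubench`.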
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB of RAM, but I did not investigate further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. For our use cases, it will still be fast enough!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2112} + +APP 2025-04-04T20:17:08.353Z --> calling completion event + +APP 2025-04-04T20:17:08.353Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}}}] + +APP 2025-04-04T20:17:08.353Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":452,"character":0},"end":{"line":453,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:17:08.353Z --> copilot | completion request + +APP 2025-04-04T20:17:08.353Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:17:08.593Z --> received didChange | language: markdown | contentVersion: 2113 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:08.686Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:17:08.687Z --> completion hints: | + +APP 2025-04-04T20:17:08.687Z --> sent request | 
{"jsonrpc":"2.0","id":440,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":294},"end":{"line":452,"character":294}}}]}]}} + +APP 2025-04-04T20:17:08.687Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:08.701Z --> received didChange | language: markdown | contentVersion: 2114 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:08.814Z --> received didChange | language: markdown | contentVersion: 2115 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:08.820Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":297,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":441} + +APP 2025-04-04T20:17:08.955Z --> received didChange | language: markdown | contentVersion: 2116 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:09.021Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one (0.4347 vs. 0.4022 ns/op, roughly 8%). The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't make use of more anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's even a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. I installed it by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has plenty of RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it.

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
| contentVersion: 2120 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:09.482Z --> received didChange | language: markdown | contentVersion: 2121 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:09.507Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post
about the f3s series for self-hosting needs in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution running on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is FreeBSD's lightweight, modern hypervisor. Its minimal overhead allows it to achieve near-native performance for virtual machines, and it leverages the capabilities of the FreeBSD operating system for performance and network management.

Bhyve supports various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms with virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for POPCNT matters because guest operating systems use this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to the virtual machines for better performance. Without it, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there!
All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso   1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps on all 3 VMs. The examples are all executed on `f0` (and the VM `r0` running on it):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
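The addressing scheme is simple: VM `r<i>` gets the address `192.168.1.<120+i>`. Purely as an illustration (not part of the actual setup), a small Go sketch generates the per-VM `nmcli` invocations; the interface name `enp0s5` and gateway `192.168.1.1` are the values from this post and would need adjusting elsewhere:

```go
package main

import "fmt"

// nmcliCommands returns the nmcli invocations for VM index i.
// VMs r0..r2 get consecutive addresses starting at 192.168.1.120.
func nmcliCommands(i int) []string {
	ip := fmt.Sprintf("192.168.1.%d", 120+i)
	return []string{
		fmt.Sprintf("nmcli connection modify enp0s5 ipv4.addresses %s/24", ip),
		"nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1",
		"nmcli connection modify enp0s5 ipv4.dns 192.168.1.1",
		"nmcli connection modify enp0s5 ipv4.method manual",
	}
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Printf("# r%d.lan.buetow.org\n", i)
		for _, cmd := range nmcliCommands(i) {
			fmt.Println(cmd)
		}
	}
}
```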
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install the latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5)
linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use more anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok      codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
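ubench's actual memory kernel isn't shown in this post, but the general idea behind such a memory score, timing how quickly large buffers can be moved through RAM, can be sketched in Go. This is an illustration only, not ubench's real code:

```go
package main

import (
	"fmt"
	"time"
)

// memScore copies a large buffer repeatedly and returns the achieved
// throughput in MiB/s -- the general shape of a memory micro-benchmark,
// though not the algorithm ubench actually uses.
func memScore() float64 {
	const size = 64 << 20 // 64 MiB working set
	src := make([]byte, size)
	dst := make([]byte, size)

	const rounds = 8
	start := time.Now()
	for i := 0; i < rounds; i++ {
		copy(dst, src)
	}
	elapsed := time.Since(start).Seconds()

	moved := float64(size) * float64(rounds) / (1 << 20) // MiB copied in total
	return moved / elapsed
}

func main() {
	fmt.Printf("~%.0f MiB/s\n", memScore())
}
```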
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB of RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% CPU (all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME     WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread  2  2:12  399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have any idea why this is the case? Let me know!

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
| contentVersion: 2127 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:10.530Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:17:10.531Z --> completion hints: | any idea why this is the case? I am not sure whether this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post. + +APP 2025-04-04T20:17:10.531Z --> sent request | {"jsonrpc":"2.0","id":443,"result":{"isIncomplete":false,"items":[{"label":"any idea why this is the case? I am not sure whether this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post.","kind":1,"preselect":true,"detail":"any idea why this is the case? I am not sure whether this is a Bhyve issue or a Rocky Linux issue. I will investigate this further in the next blog post.","insertText":"any idea why this is the case? I am not sure whether this is a Bhyve issue or a Rocky Linux issue. 
I will investigate this further in the next blog post.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":452,"character":458},"end":{"line":452,"character":458}}}]}]}} + +APP 2025-04-04T20:17:10.531Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:10.560Z --> received didChange | language: markdown | contentVersion: 2128 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:10.566Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":306,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":444} + +APP 2025-04-04T20:17:10.690Z --> received didChange | language: markdown | contentVersion: 2129 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:10.767Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
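One caveat on the silly Go benchmark above: `_ = i * i` may be eliminated entirely by the compiler, so the ~0.4 ns/op figures largely measure loop overhead. Storing the result into a package-level sink keeps the work alive; here is a sketch of that pattern (illustrative only, not taken from the sillybench repository):

```go
package main

import "fmt"

// sink is a package-level variable; writing results into it prevents
// the compiler from dead-code-eliminating the work inside the loop.
var sink uint64

// sillyWork sums i*i for i in [0, n), which the compiler cannot drop
// because the result escapes into sink.
func sillyWork(n int) uint64 {
	var acc uint64
	for i := 0; i < n; i++ {
		acc += uint64(i) * uint64(i)
	}
	return acc
}

func main() {
	sink = sillyWork(1000)
	fmt.Println(sink) // 332833500, the sum of i^2 for i in [0, 999]
}
```

In a real benchmark, the same idea applies inside `func BenchmarkX(b *testing.B)`: assign the loop's result to a package-level variable after the loop.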
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea why? Then please let me know!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2129} + +APP 2025-04-04T20:17:10.768Z --> skipping because content is stale + +APP 2025-04-04T20:17:10.768Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:10.768Z --> sent request | {"jsonrpc":"2.0","id":444,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:17:10.797Z --> received didChange | language: markdown | contentVersion: 2130 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:10.885Z --> received didChange | language: markdown | contentVersion: 2131 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:10.890Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":309,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":445} + +APP 2025-04-04T20:17:11.012Z --> received didChange | language: markdown | contentVersion: 2132 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:11.092Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, we need to adjust this configuration. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea why? Then please let me know!\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2132} + +APP 2025-04-04T20:17:11.092Z --> skipping because content is stale + +APP 2025-04-04T20:17:11.092Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:11.092Z --> sent request | {"jsonrpc":"2.0","id":445,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:17:11.157Z --> received didChange | language: markdown | contentVersion: 2133 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:11.198Z --> received didChange | language: markdown | contentVersion: 2134 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:11.390Z --> received didChange | language: markdown | contentVersion: 2135 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:11.418Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:11.641Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":313,"line":452},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":446} + +APP 2025-04-04T20:17:11.733Z --> received didChange | language: markdown | contentVersion: 2136 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:17:11.842Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nHowever, the configuration needs changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (and the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2136} + +APP 2025-04-04T20:17:11.842Z --> skipping because content is stale + +APP 2025-04-04T20:17:11.843Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:11.843Z --> sent request | {"jsonrpc":"2.0","id":446,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:17:12.009Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:12.009Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:18.354Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:17:20.055Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:04.313Z --> received 
didChange | language: markdown | contentVersion: 2137 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:04.483Z --> received didChange | language: markdown | contentVersion: 2138 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:05.016Z --> received didChange | language: markdown | contentVersion: 2139 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:05.175Z --> received didChange | language: markdown | contentVersion: 2140 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:05.427Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":2,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":447} + +APP 2025-04-04T20:18:05.442Z --> received didChange | language: markdown | contentVersion: 2141 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:05.448Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":3,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":448} + +APP 2025-04-04T20:18:05.649Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post 
about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8 GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be clear, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. I installed it by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I did not investigate further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) could include additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post, where we will install k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2141} + +APP 2025-04-04T20:18:05.649Z --> calling completion event + +APP 2025-04-04T20:18:05.649Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:05.649Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:05.649Z --> copilot | completion request + +APP 2025-04-04T20:18:05.650Z --> fetch | /copilot_internal/v2/token + +APP 2025-04-04T20:18:05.885Z --> received didChange | language: markdown | contentVersion: 2142 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:05.978Z --> received didChange | language: markdown | contentVersion: 2143 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:06.092Z --> received didChange | language: markdown | contentVersion: 2144 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:06.097Z --> received request: | 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:06.708Z --> received didChange | language: markdown | contentVersion: 2146 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:06.944Z --> received didChange | language: markdown | contentVersion: 2147 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:06.952Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:06.953Z --> completion hints: | you in the next blog post, where we will install k3s on the Rocky Linux VMs. + +APP 2025-04-04T20:18:06.953Z --> sent request | {"jsonrpc":"2.0","id":449,"result":{"isIncomplete":false,"items":[{"label":"you in the next blog post, where we will install k3s on the Rocky Linux VMs.","kind":1,"preselect":true,"detail":"you in the next blog post, where we will install k3s on the Rocky Linux VMs.","insertText":"you in the next blog post, where we will install k3s on the Rocky Linux VMs.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":80},"end":{"line":466,"character":80}}}]}]}} + +APP 2025-04-04T20:18:06.953Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:13.256Z --> received didChange | language: markdown | contentVersion: 2148 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:13.372Z --> received didChange | language: markdown | contentVersion: 2149 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.006Z --> received didChange | language: markdown | contentVersion: 2150 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.012Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":8,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":450} + +APP 2025-04-04T20:18:14.118Z --> received didChange | language: markdown | contentVersion: 2151 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.166Z --> received didChange | language: markdown | contentVersion: 2152 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.213Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
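To make the `POPCNT` check above a bit more concrete, here is a small Go sketch (my addition, not from the original setup) of what the instruction computes; on amd64 CPUs that support it, `math/bits.OnesCount64` is compiled down to the POPCNT instruction:\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"math/bits\"\n)\n\nfunc main() {\n\t// POPCNT counts the set bits (ones) in a binary word.\n\t// 0b10110110 contains five ones.\n\tfmt.Println(bits.OnesCount64(0b10110110)) // prints 5\n}\n```\n\nGuests lean on this instruction for things like bitmap scans and hash computations, which is why Bhyve passing it through to the VMs matters for performance.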
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, this configuration needs a few adjustments (e.g., as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
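A side note on what `truncate -s 100G disk0.img` actually does: it only extends the file's logical size, producing a sparse file, so no real disk blocks are allocated until the guest writes to them. Here is a small Go sketch of the same operation (my addition, not from the original setup, using a 100MB temp file instead of 100G):\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n)\n\nfunc main() {\n\t// Create a throwaway disk image file, like disk0.img in the post.\n\tf, err := os.CreateTemp(\"\", \"disk0-*.img\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer os.Remove(f.Name())\n\tf.Close()\n\n\t// Equivalent of `truncate -s 100M`: extend the logical size only.\n\tif err := os.Truncate(f.Name(), 100<<20); err != nil {\n\t\tpanic(err)\n\t}\n\n\tst, err := os.Stat(f.Name())\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(st.Size()) // logical size in bytes: 104857600\n}\n```\n\nThis is also why enlarging the image is safe before re-running the installer: the guest just sees a bigger virtual disk and partitions it during installation.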
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
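To put the silly-benchmark figures into perspective, here is a quick back-of-the-envelope calculation (my addition, not part of the original measurements) of the relative slowdown of the two VMs versus the FreeBSD host, based on the reported `BenchmarkCPUSilly1` ns/op values:\n\n```go\npackage main\n\nimport \"fmt\"\n\nfunc main() {\n\t// Reported BenchmarkCPUSilly1 ns/op figures from the runs above.\n\thost := 0.4022      // FreeBSD host\n\tlinuxVM := 0.4347   // Rocky Linux VM on Bhyve\n\tfreebsdVM := 0.4273 // FreeBSD VM on Bhyve\n\n\t// Relative slowdown versus the host, in percent.\n\tslowdown := func(vm float64) float64 { return (vm - host) / host * 100 }\n\tfmt.Printf(\"Linux VM:   +%.1f%%\\n\", slowdown(linuxVM))\n\tfmt.Printf(\"FreeBSD VM: +%.1f%%\\n\", slowdown(freebsdVM))\n}\n```\n\nThat works out to roughly +8.1% for the Linux VM and +6.2% for the FreeBSD VM per operation, so even the worst case stays in the single-digit percent range, consistent with the near-native performance claim.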
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post, where we will install k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2152} + +APP 2025-04-04T20:18:14.213Z --> skipping because content is stale + +APP 2025-04-04T20:18:14.213Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:14.213Z --> sent request | {"jsonrpc":"2.0","id":450,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:14.232Z --> received didChange | language: markdown | contentVersion: 2153 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.236Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":11,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":451} + +APP 2025-04-04T20:18:14.280Z --> received didChange | language: markdown | contentVersion: 2154 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.390Z --> received didChange | language: markdown | contentVersion: 2155 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in th\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2155} + +APP 2025-04-04T20:18:14.437Z --> skipping because content is stale + +APP 2025-04-04T20:18:14.437Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:14.437Z --> sent request | {"jsonrpc":"2.0","id":451,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:14.465Z --> received didChange | language: markdown | contentVersion: 2156 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.546Z --> received didChange | language: markdown | contentVersion: 2157 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.552Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":15,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":452} + +APP 2025-04-04T20:18:14.654Z --> received didChange | language: markdown | contentVersion: 2158 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.671Z --> received didChange | 
language: markdown | contentVersion: 2159 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:14.753Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on the VM `r0`, which runs on host `f0`:\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the ne\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2159} + +APP 2025-04-04T20:18:14.753Z --> skipping because content is stale + +APP 2025-04-04T20:18:14.754Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:14.754Z --> sent request | {"jsonrpc":"2.0","id":452,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:14.819Z --> received didChange | language: markdown | contentVersion: 2160 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.033Z --> received didChange | language: markdown | contentVersion: 2161 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.127Z --> received didChange | language: markdown | contentVersion: 2162 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.133Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":20,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":453} + +APP 2025-04-04T20:18:15.287Z --> received didChange 
| language: markdown | contentVersion: 2163 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.335Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples are all executed for `r0`, the VM running on `f0`:\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It tells `vm-bhyve` to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// Benchmarks must live in a file with a _test.go suffix, run via: go test -bench=.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be sure, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post!\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2163} + +APP 2025-04-04T20:18:15.335Z --> skipping because content is stale + +APP 2025-04-04T20:18:15.335Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:15.335Z --> sent request | {"jsonrpc":"2.0","id":453,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:15.471Z --> received didChange | language: markdown | contentVersion: 2164 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.643Z --> received didChange | language: markdown | contentVersion: 2165 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.649Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:15.710Z --> received didChange | language: markdown | contentVersion: 2166 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.822Z --> received didChange | language: markdown | contentVersion: 2167 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:15.828Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":25,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":454} + +APP 2025-04-04T20:18:16.029Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, this configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples are all executed for `r0`, the VM running on `f0`:\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It tells `vm-bhyve` to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// Benchmarks must live in a file with a _test.go suffix, run via: go test -bench=.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be sure, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post!\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2167} + +APP 2025-04-04T20:18:16.029Z --> calling completion event + +APP 2025-04-04T20:18:16.029Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:16.029Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:16.030Z --> copilot | completion request + +APP 2025-04-04T20:18:16.030Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:16.134Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:16.134Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:16.298Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:16.298Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:16.354Z --> received didChange | language: markdown | contentVersion: 2168 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:16.463Z --> received didChange | language: markdown | contentVersion: 2169 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:16.578Z --> received didChange | language: markdown | contentVersion: 2170 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:16.767Z --> received didChange | language: markdown | contentVersion: 2171 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:19.216Z --> received didChange | language: markdown | contentVersion: 2172 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:19.222Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":30,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":455} + +APP 2025-04-04T20:18:19.375Z --> received didChange | language: markdown | contentVersion: 2173 | uri: 
# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

This is the fourth blog post of the f3s series about self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on the FreeBSD-based physical machines.

<< template::inline::index f3s-kubernetes-with-freebsd-part

=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo

<< template::inline::toc

## Introduction

In this blog post, we are going to install the Bhyve hypervisor.

Bhyve is a lightweight, modern hypervisor for FreeBSD. Its main strength is its minimal overhead, which allows it to achieve near-native performance for virtual machines while leveraging the FreeBSD operating system's capabilities for networking and resource management.

Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.

## Check for `POPCNT` CPU support

POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. Bhyve support for the POPCNT instruction is important because guest operating systems utilize it to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance.
Without POPCNT support, some applications might not run or might perform sub-optimally in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we use `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve   1.74M   453G   1.74M   /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but there are no VMs yet:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC   AUTO   STATE
```

## Rocky Linux VMs

As guest VMs, I decided to use Rocky Linux.

Using Rocky Linux 9 as the guest OS is beneficial primarily because of its long-term support and stable release cycle.
It provides a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with Red Hat Enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps    06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM.
So we run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO   STATE
rocky  default     uefi     4     14G      0.0.0.0:5900   No     Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root     bhyve      6079  8   tcp4   *:5900   *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or even less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VMs, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I mostly selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE   LOADER   CPU   MEMORY   VNC            AUTO      STATE
rocky  default     uefi     4     14G      0.0.0.0:5900   Yes [1]   Running (2063)
```

### Static IP configuration

Next, we change the network configuration of the VMs from DHCP to static.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to allow only SSH key authentication from now on.
### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this particular benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I won't go through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok    codeberg.org/snonux/sillybench    0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
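Before looking at the ubench numbers, here is how I read such benchmark figures: an ns/op value converts to operations per second via 1e9 / ns_per_op, and the relative overhead between two runs is just the normalized difference. A small sketch using the ns/op values measured above (0.4022 on the FreeBSD host, 0.4347 in the Rocky Linux VM; these are my sample numbers, not universal constants):

```go
package main

import "fmt"

func main() {
	hostNsPerOp := 0.4022  // FreeBSD host, BenchmarkCPUSilly1
	guestNsPerOp := 0.4347 // Rocky Linux VM on Bhyve

	// ns/op converts to operations per second: 1e9 / ns_per_op.
	hostOpsPerSec := 1e9 / hostNsPerOp
	guestOpsPerSec := 1e9 / guestNsPerOp

	// Relative slowdown of the guest versus the host, in percent.
	slowdown := (guestNsPerOp - hostNsPerOp) / hostNsPerOp * 100

	fmt.Printf("host:  %.2e ops/s\n", hostOpsPerSec)
	fmt.Printf("guest: %.2e ops/s\n", guestOpsPerSec)
	fmt.Printf("slowdown: %.1f%%\n", slowdown)
}
```

For these sample numbers the slowdown works out to roughly 8%, which matches the "small overhead" impression from the silly benchmark.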
We limit it to one CPU for the first run with `-s 1`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:   1705237 (0.48s)
-----------------------------------
Ubench Single AVG:   1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2660220
Ubench MEM:   3095182
--------------------
Ubench AVG:   2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:   2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM produced an almost identical CPU result to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host constantly using 399% of the CPU (all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

See you in the next blog post!

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing the CPU

The aim is to show that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of installing a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
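When comparing several `ubench` runs, the summary figures can be pulled out with a tiny helper instead of eyeballing the output; a hedged sketch (assuming the output format shown in the runs below, e.g. `Ubench AVG: 2877701`):

```sh
# Print the "Ubench AVG" figure from a saved ubench run.
# Assumes the multi-CPU output format shown in the runs below.
avg() { awk '/^Ubench AVG:/ { print $3 }' "$1"; }

# Example workflow (illustrative):
#   doas ubench > host.txt   on the FreeBSD host
#   ubench > vm.txt          inside the VM
#   avg host.txt; avg vm.txt
```

The single-CPU runs print `Ubench Single AVG:` instead, so the pattern would need adjusting for those.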
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM produced almost identical CPU results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed that the `bhyve` process on the host was constantly using 399% CPU (all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
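Such additional VMs become cheap to create if a vm-bhyve template is prepared once. A hedged sketch of what a template for a small Linux guest could look like (the file name, sizes, and the datastore's `.templates` path follow the vm-bhyve documentation, but are illustrative here):

```
# /zroot/bhyve/.templates/tinker-linux.conf (illustrative)
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=2
memory=4G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
```

A new guest could then be created with something like `vm create -t tinker-linux mynewvm`; check the vm-bhyve documentation for the exact template workflow.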
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility keeps options open for managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

See you in the next blog post of this series!

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:20.813Z --> received didChange | language: markdown | contentVersion: 2187 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:20.867Z --> failed to parse line: | failed to parse | Content-Length: 18942
+
+ + +APP 2025-04-04T20:18:20.869Z --> received didChange | language: markdown | contentVersion: 2188 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:20.874Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":46,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":458} + +APP 2025-04-04T20:18:21.075Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all of its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs such as 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, the automation doesn't seem worth it, as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IP addresses. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to demonstrate that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. I installed it by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB of RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2188} + +APP 2025-04-04T20:18:21.076Z --> calling completion event + +APP 2025-04-04T20:18:21.076Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:21.076Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:21.076Z --> copilot | completion request + +APP 2025-04-04T20:18:21.076Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:21.596Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:21.596Z --> completion hints: | + +APP 2025-04-04T20:18:21.597Z --> sent request | {"jsonrpc":"2.0","id":458,"result":{"isIncomplete":false,"items":[{"label":"","kind":1,"preselect":true,"detail":"","insertText":"","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":46},"end":{"line":466,"character":46}}}]}]}} + +APP 2025-04-04T20:18:21.597Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:23.988Z --> received didChange | language: markdown | contentVersion: 2189 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:24.146Z --> received didChange | language: markdown | contentVersion: 2190 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:24.274Z --> received didChange | language: markdown | contentVersion: 2191 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:24.511Z --> received didChange | language: markdown | contentVersion: 2192 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:24.572Z --> received didChange | language: markdown | contentVersion: 2193 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:24.717Z --> received didChange | language: markdown | contentVersion: 2194 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:24.723Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":52,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":459} + +APP 2025-04-04T20:18:24.799Z --> received didChange | language: markdown | contentVersion: 2195 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:24.925Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
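As a quick aside, the "slightly slower" claim from the silly benchmark can be put into numbers. Taking the BenchmarkCPUSilly1 figures quoted above (0.4022 ns/op on the FreeBSD host vs. 0.4347 ns/op in the Rocky Linux VM), a one-shot awk calculation gives the relative difference:

```sh
# Relative difference of the BenchmarkCPUSilly1 results
# (ns/op figures copied from the benchmark runs above).
awk 'BEGIN {
    host = 0.4022  # FreeBSD host, ns/op
    vm   = 0.4347  # Rocky Linux VM @ Bhyve, ns/op
    printf "relative difference: %.1f%%\n", (vm - host) / host * 100
}'
```

That is roughly an 8% gap in this silly micro-benchmark; as noted, part of it may be down to the older Go toolchain rather than to Bhyve overhead.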
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe w\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2195} + +APP 2025-04-04T20:18:24.925Z --> skipping because content is stale + +APP 2025-04-04T20:18:24.925Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:24.925Z --> sent request | {"jsonrpc":"2.0","id":459,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:24.982Z --> received didChange | language: markdown | contentVersion: 2196 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:25.195Z --> received didChange | language: markdown | contentVersion: 2197 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:25.201Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":55,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":460} + +APP 2025-04-04T20:18:25.402Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: 
Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot, the configuration needs some adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples are all executed on `f0` (and on the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
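The per-VM network commands below follow a single pattern; only the last IP octet and the hostname change from VM to VM. Purely as an illustrative sketch (this helper is hypothetical and not part of the setup; it only prints the commands for review before pasting them into the VM's console, and it assumes the `enp0s5` interface name and the `192.168.1.12x` addressing used in this series):

```sh
#!/bin/sh
# Print the static-network commands for VM index 0, 1 or 2.
# Nothing is executed here; the output is meant to be reviewed
# and then pasted into a root shell on the respective VM.
print_net_cmds() {
    ip="192.168.1.12$1"
    host="r$1.lan.buetow.org"
    cat <<EOF
nmcli connection modify enp0s5 ipv4.address $ip/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
nmcli connection down enp0s5
nmcli connection up enp0s5
hostnamectl set-hostname $host
EOF
}

print_net_cmds 0
```

For example, `print_net_cmds 2` prints the command list for `r2.lan.buetow.org`.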
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nThen we configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
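As a quick aside, the "slightly slower" claim from the silly benchmark can be put into numbers. Taking the BenchmarkCPUSilly1 figures quoted above (0.4022 ns/op on the FreeBSD host vs. 0.4347 ns/op in the Rocky Linux VM), a one-shot awk calculation gives the relative difference:

```sh
# Relative difference of the BenchmarkCPUSilly1 results
# (ns/op figures copied from the benchmark runs above).
awk 'BEGIN {
    host = 0.4022  # FreeBSD host, ns/op
    vm   = 0.4347  # Rocky Linux VM @ Bhyve, ns/op
    printf "relative difference: %.1f%%\n", (vm - host) / host * 100
}'
```

That is roughly an 8% gap in this silly micro-benchmark; as noted, part of it may be down to the older Go toolchain rather than to Bhyve overhead.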
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2197} + +APP 2025-04-04T20:18:25.403Z --> calling completion event + +APP 2025-04-04T20:18:25.403Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:25.403Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:25.403Z --> copilot | completion request + +APP 2025-04-04T20:18:25.404Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:25.793Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:25.794Z --> completion hints: | will install k3s on the Rocky Linux VMs. Or maybe we will do something else. I am not sure yet. + +APP 2025-04-04T20:18:25.795Z --> sent request | {"jsonrpc":"2.0","id":460,"result":{"isIncomplete":false,"items":[{"label":"will install k3s on the Rocky Linux VMs. Or maybe we will do something else. I am not sure yet.","kind":1,"preselect":true,"detail":"will install k3s on the Rocky Linux VMs. Or maybe we will do something else. 
I am not sure yet.","insertText":"will install k3s on the Rocky Linux VMs. Or maybe we will do something else. I am not sure yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":150},"end":{"line":466,"character":150}}}]}]}} + +APP 2025-04-04T20:18:25.795Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:26.029Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:26.134Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:26.190Z --> received didChange | language: markdown | contentVersion: 2198 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:26.336Z --> received didChange | language: markdown | contentVersion: 2199 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:26.556Z --> received didChange | language: markdown | contentVersion: 2200 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:26.693Z --> received didChange | language: markdown | contentVersion: 2201 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:26.754Z --> received didChange | language: markdown | 
contentVersion: 2202 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:26.761Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":60,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":461} + +APP 2025-04-04T20:18:26.915Z --> received didChange | language: markdown | contentVersion: 2203 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:26.963Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install `bhyve-firmware`, the package required to boot Bhyve guests with UEFI firmware.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

Bhyve now stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso    1808 MB  4780 kBps  06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration now looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

To make Rocky Linux boot, the configuration needs a few adjustments (a Linux guest type and a UEFI loader). And as we intend to run the majority of the workload in the k3s cluster on these Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB of RAM.
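Such spec changes could also be scripted rather than edited by hand; a sketch (the `set_vm_spec` helper is hypothetical, assuming the `key=value` format of `rocky.conf`):

```sh
# set_vm_spec: rewrite the cpu= and memory= lines of a vm-bhyve config
# file in place. Hypothetical helper, not part of vm-bhyve. GNU sed
# syntax; on FreeBSD, use `sed -i ''` with an explicit empty suffix.
set_vm_spec() {
    conf="$1" cpus="$2" mem="$3"
    sed -i -e "s/^cpu=.*/cpu=${cpus}/" \
           -e "s/^memory=.*/memory=${mem}/" "$conf"
}

# e.g.: set_vm_spec /zroot/bhyve/rocky/rocky.conf 4 14G   (run as root)
```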
So we ran `doas vm configure rocky` and modified the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, the automation doesn't seem worth it, as we do this only once a year or less often.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort.
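As an aside on the `truncate` step above: it returns instantly because it only grows the image's logical size; the filesystem allocates blocks lazily as the guest writes. A quick sketch to see the difference on any sparse file:

```sh
# A truncated file is sparse: it reports its full logical size, but the
# space actually allocated stays near zero until data is written.
img=$(mktemp)
truncate -s 100G "$img"   # instant; no blocks allocated
ls -lh "$img"             # logical size: 100G
du -h "$img"              # allocated size: (close to) 0
rm "$img"
```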
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all three VMs. The examples are all executed on `f0` (with the VM `r0` running on `f0`):

### VM auto-start after host reboot

To automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we change the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add those entries to the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:

```sh
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the three VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all three VMs and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4022 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64

[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4    1000000000    0.4347 ns/op
BenchmarkCPUSilly2-4    1000000000    0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

As I am curious and don't want to compare apples with bananas, I also installed a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use all of that anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1    1000000000    0.4273 ns/op
BenchmarkCPUSilly2    1000000000    0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's a bit better than Linux! This is not really a scientific benchmark, of course, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% CPU (all 4 CPUs):

```
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

See you in the next blog post of this series.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be ins\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2208} + +APP 2025-04-04T20:18:27.324Z --> skipping because content is stale + +APP 2025-04-04T20:18:27.324Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:27.324Z --> sent request | {"jsonrpc":"2.0","id":462,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:27.477Z --> received didChange | language: markdown | contentVersion: 2209 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:27.632Z --> received didChange | language: markdown | contentVersion: 2210 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:27.671Z --> received didChange | language: markdown | contentVersion: 2211 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:27.851Z --> received didChange | language: markdown | contentVersion: 2212 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:28.077Z --> received didChange | language: markdown | contentVersion: 2213 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:28.086Z --> received didChange | language: markdown | contentVersion: 2214 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:28.135Z --> received didChange | language: markdown | contentVersion: 2215 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:28.221Z --> received didChange | language: markdown | contentVersion: 2216 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:28.227Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":74,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":463} + +APP 2025-04-04T20:18:28.429Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2216} + +APP 2025-04-04T20:18:28.429Z --> calling completion event + +APP 2025-04-04T20:18:28.429Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:28.429Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:28.429Z --> copilot | completion request + +APP 2025-04-04T20:18:28.430Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:28.533Z --> received didChange | language: markdown | contentVersion: 2217 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:28.611Z --> received didChange | language: markdown | contentVersion: 2218 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:28.868Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:28.869Z --> completion hints: | k3s on the VMs, or 
maybe we will be doing something else. I am not sure yet. But I am sure it will be fun! + +APP 2025-04-04T20:18:28.869Z --> sent request | {"jsonrpc":"2.0","id":463,"result":{"isIncomplete":false,"items":[{"label":"k3s on the VMs, or maybe we will be doing something else. I am not sure yet. But I am sure it will be fun!","kind":1,"preselect":true,"detail":"k3s on the VMs, or maybe we will be doing something else. I am not sure yet. But I am sure it will be fun!","insertText":"k3s on the VMs, or maybe we will be doing something else. I am not sure yet. But I am sure it will be fun!","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":180},"end":{"line":466,"character":180}}}]}]}} + +APP 2025-04-04T20:18:28.869Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:28.892Z --> received didChange | language: markdown | contentVersion: 2219 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:29.138Z --> received didChange | language: markdown | contentVersion: 2220 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:29.273Z --> received didChange | language: markdown | contentVersion: 2221 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:29.510Z --> received didChange | language: markdown | contentVersion: 2222 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:29.737Z --> received didChange | language: markdown | contentVersion: 2223 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:29.817Z --> received didChange | language: markdown | contentVersion: 2224 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:29.822Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":82,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":464} + +APP 2025-04-04T20:18:30.022Z --> received didChange | language: markdown | contentVersion: 2225 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:30.024Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
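A side note on the `truncate` call above: it grows the disk image sparsely, so the file's apparent size jumps to the requested size immediately, while disk blocks are only allocated as the guest actually writes data. A small sketch on a throwaway file (not from the original setup; the sizes here are arbitrary):

```sh
# Demonstrate what `truncate -s` does: the apparent size grows to the
# requested value without any data being written (a sparse file), which
# is exactly how enlarging disk0.img works.
img=$(mktemp)
truncate -s 100M "$img"   # grow sparsely, like enlarging the VM disk image
wc -c < "$img"            # apparent size in bytes: 104857600
rm -f "$img"
```

The guest still has to make use of the new space; here the Rocky installer's auto-partitioning picks up the full 100G because the image was enlarged before the installation was re-run.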
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
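For reference, the addressing scheme used in this section puts the FreeBSD hosts at `192.168.1.130`-`132` and their Rocky VMs at `192.168.1.120`-`122`. Purely as an illustration, the pattern is regular enough that the `/etc/hosts` entries can be generated in one go:

```sh
# Generate the /etc/hosts entries used in this section:
# FreeBSD hosts f0-f2 at 192.168.1.130-132, Rocky VMs r0-r2 at .120-.122.
hosts_entries=$(
  for i in 0 1 2; do
    printf '192.168.1.13%s f%s f%s.lan f%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
    printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$i" "$i" "$i" "$i"
  done
)
echo "$hosts_entries"
```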
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 8GB of RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing k3s on these VMs next.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2225} + +APP 2025-04-04T20:18:30.024Z --> skipping because content is stale + +APP 2025-04-04T20:18:30.024Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:30.024Z --> sent request | {"jsonrpc":"2.0","id":464,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:30.462Z --> received didChange | language: markdown | contentVersion: 2226 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:30.519Z --> received didChange | language: markdown | contentVersion: 2227 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:30.715Z --> received didChange | language: markdown | contentVersion: 2228 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:30.924Z --> received didChange | language: markdown | contentVersion: 2229 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:30.994Z --> received didChange | language: markdown | contentVersion: 2230 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:31.077Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:31.246Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":86,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":465} + +APP 2025-04-04T20:18:31.269Z --> received didChange | language: markdown | contentVersion: 2231 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:31.382Z --> received didChange | language: markdown | contentVersion: 2232 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2232} + +APP 2025-04-04T20:18:31.447Z --> skipping because content is stale + +APP 2025-04-04T20:18:31.447Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:31.447Z --> sent request | {"jsonrpc":"2.0","id":465,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:31.485Z --> received didChange | language: markdown | contentVersion: 2233 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:31.682Z --> received didChange | language: markdown | contentVersion: 2234 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:31.742Z --> received didChange | language: markdown | contentVersion: 2235 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:31.932Z --> received didChange | language: markdown | contentVersion: 2236 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:31.939Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":92,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":466} + +APP 2025-04-04T20:18:32.140Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nNext, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
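(A side note on the silly Go benchmark above, before moving on: `_ = sillyResult` does not reliably stop the Go compiler from optimizing the loop body away, which is one likely reason for the suspiciously low sub-nanosecond per-op times. A more robust variant, sketched here as my own hypothetical example rather than code from the sillybench repository, assigns the result to a package-level sink variable:)

```go
package main

import (
	"fmt"
	"testing"
)

// sink is package-level, so writes to it are observable side effects and
// the benchmark loop cannot be eliminated as dead code.
var sink float64

func BenchmarkCPUSillySink(b *testing.B) {
	var result float64
	for i := 0; i < b.N; i++ {
		result += float64(i) / (float64(i) + 1)
	}
	sink = result // publish the accumulated value
}

func main() {
	// testing.Benchmark also works outside of `go test` binaries.
	r := testing.Benchmark(BenchmarkCPUSillySink)
	fmt.Printf("%d iterations, %s\n", r.N, r)
}
```

With a sink like this, the measured ns/op reflects real work instead of an empty loop.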
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2236} + +APP 2025-04-04T20:18:32.141Z --> calling completion event + +APP 2025-04-04T20:18:32.141Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:32.141Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:32.141Z --> copilot | completion request + +APP 2025-04-04T20:18:32.141Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:33.188Z --> received didChange | language: markdown | contentVersion: 2237 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:33.390Z --> received didChange | language: markdown | contentVersion: 2238 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:33.497Z --> received didChange | language: markdown | contentVersion: 2239 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:33.570Z --> received didChange | language: markdown | contentVersion: 2240 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:33.715Z --> received didChange | language: markdown | contentVersion: 2241 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:33.875Z --> received didChange | language: markdown | contentVersion: 2242 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:33.925Z --> received didChange | language: markdown | contentVersion: 2243 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:34.035Z --> received didChange | language: markdown | contentVersion: 2244 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:34.041Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":100,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":467} + +APP 2025-04-04T20:18:34.243Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nTo make Rocky Linux boot, the configuration needs a few adjustments. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
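If the stop/resize/reinstall steps ever do need to be repeated across all three hosts, they could be scripted. Below is a hypothetical dry-run sketch (not from the post; it assumes SSH access from the laptop to `f0`, `f1`, and `f2`) that only prints the commands instead of executing them:

```sh
# Hypothetical dry-run helper: print the per-host resize commands
# (hostnames, dataset path, and ISO name as used in this post).
print_resize_steps() {
    for host in f0 f1 f2; do
        echo "ssh ${host} doas vm stop rocky"
        echo "ssh ${host} doas truncate -s 100G /bhyve/rocky/disk0.img"
        echo "ssh ${host} doas vm install rocky Rocky-9.5-x86_64-minimal.iso"
    done
}

print_resize_steps
```

Piping the output into `sh` (after reviewing it) would then perform the actual run.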
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples shown are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
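The addressing scheme used below is regular: VM `rN` gets IP `192.168.1.(120+N)`. As a small sketch (assuming nothing beyond the names in this post), the `/etc/hosts` entries can be generated rather than typed:

```sh
# Generate the /etc/hosts entries for the three Rocky VMs (r0..r2).
# IP scheme from this post: r0 -> 192.168.1.120 ... r2 -> 192.168.1.122.
gen_hosts_entries() {
    for i in 0 1 2; do
        echo "192.168.1.$((120 + i)) r${i} r${i}.lan r${i}.lan.buetow.org"
    done
}

gen_hosts_entries
```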
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// Benchmarks must live in a file ending in _test.go, e.g. main_test.go.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
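As an aside, the ns/op figures from the sillybench runs are easier to grasp as operations per second; a minimal conversion sketch (the 0.4022 value is the FreeBSD host result from above):

```sh
# Convert a Go benchmark figure from ns/op to operations per second.
# 1 second = 1e9 ns, so ops/s = 1e9 / ns_per_op (here shown in billions).
ns_to_ops() {
    awk -v n="$1" 'BEGIN { printf "%.2f billion ops/s\n", 1 / n }'
}

ns_to_ops 0.4022
```

So roughly 2.5 billion silly iterations per second on the host, and about 2.3 billion in the Linux VM.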
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2244} + +APP 2025-04-04T20:18:34.243Z --> calling completion event + +APP 2025-04-04T20:18:34.243Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:34.244Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:34.244Z --> copilot | completion request + +APP 2025-04-04T20:18:34.244Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:34.244Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:34.244Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:34.787Z --> response | 
https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:34.787Z --> completion hints: | for the VMs or setting up a Kubernetes cluster on the VMs. I haven't decided yet. + +APP 2025-04-04T20:18:34.788Z --> sent request | {"jsonrpc":"2.0","id":467,"result":{"isIncomplete":false,"items":[{"label":"for the VMs or setting up a Kubernetes cluster on the VMs. I haven't decided yet.","kind":1,"preselect":true,"detail":"for the VMs or setting up a Kubernetes cluster on the VMs. I haven't decided yet.","insertText":"for the VMs or setting up a Kubernetes cluster on the VMs. I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":181},"end":{"line":466,"character":181}}}]}]}} + +APP 2025-04-04T20:18:34.788Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:34.886Z --> received didChange | language: markdown | contentVersion: 2245 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:34.996Z --> received didChange | language: markdown | contentVersion: 2246 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:35.135Z --> received didChange | language: markdown | contentVersion: 2247 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:35.237Z --> received didChange | language: markdown | contentVersion: 2248 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:35.300Z --> received didChange | language: markdown | 
contentVersion: 2249 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:35.305Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":105,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":468} + +APP 2025-04-04T20:18:35.404Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:35.507Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we need to adjust this configuration to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples shown are all executed on `f0` (with the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// Benchmarks must live in a file ending in _test.go, e.g. main_test.go.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) and got similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing hilghly available storage with \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2249} + +APP 2025-04-04T20:18:35.507Z --> calling completion event + +APP 2025-04-04T20:18:35.507Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:35.507Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:35.507Z --> copilot | completion request + +APP 2025-04-04T20:18:35.508Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:35.578Z --> received didChange | language: markdown | contentVersion: 2250 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:35.655Z --> received didChange | language: markdown | contentVersion: 2251 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:35.739Z --> received didChange | language: markdown | contentVersion: 2252 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:35.932Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:35.932Z --> completion hints: | Ceph or something similar. Or maybe we will be installing k3s on the VMs. I haven't decided yet. + +APP 2025-04-04T20:18:35.932Z --> sent request | {"jsonrpc":"2.0","id":468,"result":{"isIncomplete":false,"items":[{"label":"Ceph or something similar. Or maybe we will be installing k3s on the VMs. I haven't decided yet.","kind":1,"preselect":true,"detail":"Ceph or something similar. Or maybe we will be installing k3s on the VMs. I haven't decided yet.","insertText":"Ceph or something similar. Or maybe we will be installing k3s on the VMs. I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":201},"end":{"line":466,"character":201}}}]}]}} + +APP 2025-04-04T20:18:35.932Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:36.085Z --> received didChange | language: markdown | contentVersion: 2253 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:36.659Z --> received didChange | language: markdown | contentVersion: 2254 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:36.664Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":110,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":469} + +APP 2025-04-04T20:18:36.866Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. 
If the host CPU supports POPCNT, Bhyve can pass this capability through to virtual machines for better performance. Without POPCNT support, some applications might not run, or might perform sub-optimally, in virtualized environments.

To check for `POPCNT` support, run:

```sh
paul@f0:~ % dmesg | grep 'Features2=.*POPCNT'
  Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND>
```

So it's there! All good.

## Basic Bhyve setup

For managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the `bhyve-firmware` package so that Bhyve can boot guests via UEFI.

=> https://github.com/churchers/vm-bhyve

The following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (adjust it if your hardware differs):

```sh
paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -> YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -> zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
```

`vm-bhyve` stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:

```sh
paul@f0:~ % zfs list | grep bhyve
zroot/bhyve  1.74M  453G  1.74M  /zroot/bhyve
```

For convenience, we also create this symlink:

```sh
paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve
```

Now, Bhyve is ready to rumble, but no VMs are there yet:

```sh
paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
```

## Rocky Linux VMs

As the guest OS, I decided to use Rocky Linux.

Using Rocky Linux 9 as the VM OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance.

=> https://rockylinux.org/

### ISO download

We're going to install Rocky Linux from the latest minimal ISO:

```sh
paul@f0:~ % doas vm iso \
    https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso  1808 MB 4780 kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
```

### VM configuration

The default Bhyve VM configuration looks like this:

```sh
paul@f0:/bhyve/rocky % cat rocky.conf
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c4655ac-c828-11ef-a920-e8ff1ed71ca0"
network0_mac="58:9c:fc:0d:13:3f"
```

The `uuid` and the `network0_mac` differ for each of the three VMs.

This default configuration won't boot Rocky Linux, though, and it needs a few more adjustments. As we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14 GB of RAM.
We run `doas vm configure rocky` and modify the configuration to:

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
```

### VM installation

To start the installer from the downloaded ISO, we run:

```sh
paul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
Starting rocky
  * found guest in /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900
root  bhyve  6079  8  tcp4  *:5900  *:*
```

Port 5900 is now open for VNC connections, so I connected with a VNC client and clicked through the installation dialogues. This could be done unattended or more automated, but with only three VMs to install, and installations happening once a year or less often, the automation doesn't seem worth it.

### Increasing the disk image

By default, the VM disk image is only 20G, which is a bit small for our purposes. So I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:

```sh
paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso
```

### Connect to VNC

For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple one) and manually ran through the base installation for each of the VMs. Again, this could have been automated a bit more, but for just three VMs it wasn't worth the effort.
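As a small convenience, the VNC endpoint of each VM can also be read from the hosts over SSH. This is only a sketch; it assumes the `doas` rules and the SSH access to `f0`–`f2` used throughout this series:

```sh
#!/bin/sh
# Print the bhyve VNC listener on each host. In sockstat's listing,
# column 6 is the local address, e.g. *:5900.
for h in f0 f1 f2; do
    printf '%s: ' "$h"
    ssh "$h" "doas sockstat -4 -l | awk '/bhyve/ {print \$6}'"
done
```

With one VM per host, this prints one `*:5900` line per host.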
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.

=> ./f3s-kubernetes-with-freebsd-part-4/1.png

=> ./f3s-kubernetes-with-freebsd-part-4/2.png

I primarily selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.

=> ./f3s-kubernetes-with-freebsd-part-4/3.png

=> ./f3s-kubernetes-with-freebsd-part-4/4.png

## After install

We perform the following steps for all 3 VMs. The examples are all executed on `f0` (and on `r0`, the VM running on `f0`):

### VM auto-start after host reboot

To automatically start the VM after a host reboot, we append the following to `/etc/rc.conf` on the FreeBSD hosts:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf
vm_list="rocky"
vm_delay="5"
END
```

The `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, and there is currently only one VM per host; maybe later, when there are more, this will be useful. Afterwards, there's a `Yes` indicator in the `AUTO` column:

```sh
paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    4    14G     0.0.0.0:5900  Yes [1]  Running (2063)
```

### Static IP configuration

After that, we switch the network configuration of the VMs from DHCP to static addresses.
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:

```
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
```

For the Rocky VMs, we add their entries on the FreeBSD host systems as well:

```sh
paul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

And we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:

```sh
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~]# nmcli connection down enp0s5
[root@r0 ~]# nmcli connection up enp0s5
[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~]# cat <<END >>/etc/hosts
192.168.1.120 r0 r0.lan r0.lan.buetow.org
192.168.1.121 r1 r1.lan r1.lan.buetow.org
192.168.1.122 r2 r2.lan r2.lan.buetow.org
END
```

Where:

* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)
* `192.168.1.1` is the address of my home router, which also does DNS.

### Permitting root login

As these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.

Once done, we reboot the VM by running `reboot` inside it to test whether everything was configured and persisted correctly.

After the reboot, I copied my public key from my laptop to the 3 VMs:

```sh
% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done
```

Then, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs and set `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~]# dnf update
[root@r0 ~]# reboot
```

## Stress testing CPU

The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4022 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4027 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4  1000000000  0.4347 ns/op
BenchmarkCPUSilly2-4  1000000000  0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I was curious and didn't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

Here are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1  1000000000  0.4273 ns/op
BenchmarkCPUSilly2  1000000000  0.4286 ns/op
PASS
ok  codeberg.org/snonux/sillybench  0.949s
```

It's even a bit better than Linux! Admittedly, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
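Since `ubench` turns out not to be packaged for Rocky Linux (more on that below), a rough, portable stand-in for its memory test can be sketched in Go. The 64 MiB working set and the iteration count are arbitrary choices of mine, not anything `ubench` does, so the numbers are only comparable between machines running this same sketch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const size = 64 << 20 // 64 MiB buffers (arbitrary working set)
	src := make([]byte, size)
	dst := make([]byte, size)

	const iters = 32
	start := time.Now()
	for i := 0; i < iters; i++ {
		copy(dst, src) // memory-bandwidth-bound copy
	}
	elapsed := time.Since(start)

	mib := float64(size*iters) / (1 << 20)
	fmt.Printf("copied %.0f MiB in %v (%.0f MiB/s)\n",
		mib, elapsed, mib/elapsed.Seconds())
}
```

Running it with `go run` on the host and inside the VMs and comparing the MiB/s figures would give a crude cross-platform memory comparison.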
Here, we limit `ubench` to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:  672792 (0.40s)
Ubench Single MEM:  852757 (0.48s)
-----------------------------------
Ubench Single AVG:  762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM produced almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host constantly using 399% CPU (all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread  2  2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory result seems slightly off, but I don't know whether to trust it. Do you have an idea?

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for the future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) of the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

See you in the next blog post of this series. Maybe we will be installing highly available storage with HAST.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:37.126Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":113,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":470} + +APP 2025-04-04T20:18:37.211Z --> received didChange | language: markdown | contentVersion: 2258 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:37.265Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:37.266Z --> completion hints: | or GlusterFS or Ceph. Or maybe we will be installing k3s on the VMs. I haven't decided yet. + +APP 2025-04-04T20:18:37.266Z --> sent request | {"jsonrpc":"2.0","id":469,"result":{"isIncomplete":false,"items":[{"label":"or GlusterFS or Ceph. Or maybe we will be installing k3s on the VMs. I haven't decided yet.","kind":1,"preselect":true,"detail":"or GlusterFS or Ceph. Or maybe we will be installing k3s on the VMs. I haven't decided yet.","insertText":"or GlusterFS or Ceph. Or maybe we will be installing k3s on the VMs. 
I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":201},"end":{"line":466,"character":201}}}]}]}} + +APP 2025-04-04T20:18:37.266Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:37.328Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
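The `truncate` call works because the disk image is a sparse file: it only raises the file's logical size, and the host allocates blocks lazily as the guest writes to them. A minimal sketch of this behaviour, using a hypothetical throwaway file rather than the real `disk0.img`:

```sh
# Demo of sparse-file growth, the same mechanism as `truncate -s 100G disk0.img`.
# Uses a temporary file so nothing here touches a real VM image.
tmp=$(mktemp)
truncate -s 1G "$tmp"   # logical size jumps to 1 GiB instantly
ls -l "$tmp"            # reports 1073741824 bytes
du -k "$tmp"            # but only a handful of blocks are actually allocated
rm -f "$tmp"
```

Inside the guest, the extra space still has to be picked up by the partition table and filesystem; here that happens automatically because the image was enlarged before the installer did its auto-partitioning.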
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The following examples are all executed on `f0` (and on the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes the host wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
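The static addresses below follow a simple scheme: VM `rN` gets `192.168.1.12N`, next to the FreeBSD hosts `fN` at `192.168.1.13N`. Purely as an illustration (the `.12N` scheme and hostnames are specific to my network), the per-VM values can be derived from the VM index:

```sh
# Illustration only: derive each Rocky VM's static address from its index.
# Assumes the r0..r2 / 192.168.1.120..122 scheme used in this post.
for i in 0 1 2; do
  echo "r$i.lan.buetow.org -> 192.168.1.12$i/24 via gateway 192.168.1.1"
done
```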
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add corresponding entries on the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot each VM by running `reboot` inside it to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the three VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all three VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing the CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// Note: Go benchmarks must live in a file whose name ends in _test.go.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use more than that anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would include additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2258} + +APP 2025-04-04T20:18:37.328Z --> skipping because content is stale + +APP 2025-04-04T20:18:37.328Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:37.328Z --> sent request | {"jsonrpc":"2.0","id":470,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:37.398Z --> received didChange | language: markdown | contentVersion: 2259 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:37.537Z --> received didChange | language: markdown | contentVersion: 2260 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:37.542Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":116,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":471} + +APP 2025-04-04T20:18:37.688Z --> received didChange | language: markdown | contentVersion: 2261 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:37.744Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we s\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2261} + +APP 2025-04-04T20:18:37.744Z --> skipping because content is stale + +APP 2025-04-04T20:18:37.744Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:37.744Z --> sent request | {"jsonrpc":"2.0","id":471,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:37.926Z --> received didChange | language: markdown | contentVersion: 2262 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:38.135Z --> received didChange | language: markdown | contentVersion: 2263 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:38.248Z --> received didChange | language: markdown | contentVersion: 2264 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:38.430Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:38.438Z --> received 
didChange | language: markdown | contentVersion: 2265 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:38.551Z --> received didChange | language: markdown | contentVersion: 2266 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:38.556Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":122,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":472} + +APP 2025-04-04T20:18:38.688Z --> received didChange | language: markdown | contentVersion: 2267 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:38.757Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we still need to adjust the configuration so that Rocky Linux boots. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we start s\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2267} + +APP 2025-04-04T20:18:38.757Z --> skipping because content is stale + +APP 2025-04-04T20:18:38.757Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:38.757Z --> sent request | {"jsonrpc":"2.0","id":472,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:38.856Z --> received didChange | language: markdown | contentVersion: 2268 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:38.934Z --> received didChange | language: markdown | contentVersion: 2269 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.072Z --> received didChange | language: markdown | contentVersion: 2270 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.166Z --> received didChange | language: markdown | contentVersion: 2271 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.219Z --> received didChange | language: markdown | 
contentVersion: 2272 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.277Z --> received didChange | language: markdown | contentVersion: 2273 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.358Z --> received didChange | language: markdown | contentVersion: 2274 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.364Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":130,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":473} + +APP 2025-04-04T20:18:39.491Z --> received didChange | language: markdown | contentVersion: 2275 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.565Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. 
Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut we still need to adjust the configuration so that Rocky Linux boots. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add corresponding entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of them:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// These benchmarks live in a _test.go file so that `go test -bench=.` picks them up.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we start setting u\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2275} + +APP 2025-04-04T20:18:39.565Z --> skipping because content is stale + +APP 2025-04-04T20:18:39.565Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:39.565Z --> sent request | {"jsonrpc":"2.0","id":473,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:39.733Z --> received didChange | language: markdown | contentVersion: 2276 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.795Z --> received didChange | language: markdown | contentVersion: 2277 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:39.800Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":133,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":474} + +APP 2025-04-04T20:18:40.002Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool (mounted at `/zroot/bhyve`):\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis configuration needs some adjustments to make Rocky Linux boot. And as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we also give them beefy specs: 4 CPU cores and 14GB RAM. 
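Side note: `vm configure` drops you into an editor, but since `vm-bhyve` keeps the configuration in a plain file (as the `cat rocky.conf` above shows), the same adjustments could also be scripted. The helper below is only a sketch of that idea (the function name and target path are my own, not part of `vm-bhyve`); `uuid` and `network0_mac` stay per-VM, so they are passed in:

```sh
# Hypothetical helper: write the adjusted VM configuration non-interactively.
# In real use the target would be /zroot/bhyve/rocky/rocky.conf on each host.
write_rocky_conf() {
    conf=$1; uuid=$2; mac=$3
    cat > "$conf" <<EOF
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga=io
uuid="$uuid"
network0_mac="$mac"
EOF
}

# Example: regenerate the config for the VM on f0 (to /tmp for illustration).
write_rocky_conf /tmp/rocky.conf \
    1c45400b-c828-11ef-8871-e8ff1ed71cac 58:9c:fc:0d:13:3f
```

With only three VMs this buys little, but it documents the delta from the default configuration in one place.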
So we run `doas vm configure rocky` and modify it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI mostly selected the default settings (auto-partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (i.e., for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. It may become useful later when there are more VMs. After adding these lines, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
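The per-VM network commands below only differ in the VM index, so as a small sketch (the helper name is hypothetical; it assumes the `192.168.1.12<n>` addressing scheme and the `enp0s5` interface name used in this series) they could be generated rather than typed three times:

```sh
# Hypothetical helper: print (not run) the network setup commands for VM r<n>.
gen_net_cmds() {
    n=$1
    cat <<EOF
nmcli connection modify enp0s5 ipv4.addresses 192.168.1.12${n}/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
hostnamectl set-hostname r${n}.lan.buetow.org
EOF
}

gen_net_cmds 0   # the command list for r0
```

The output could then be reviewed and pasted into the root shell of each VM.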
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add corresponding entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of them:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\n// These benchmarks live in a _test.go file so that `go test -bench=.` picks them up.\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with oranges, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use that many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
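Before moving on, a quick back-of-envelope check of the silly benchmark figures quoted above (`BenchmarkCPUSilly1`: 0.4022 ns/op on the FreeBSD host, 0.4347 in the Rocky Linux VM, 0.4273 in the FreeBSD VM) puts the apparent virtualization overhead into perspective:

```sh
# Relative slowdown of each VM versus the FreeBSD host, from the ns/op
# numbers of BenchmarkCPUSilly1 above.
awk 'BEGIN {
    host = 0.4022; linux_vm = 0.4347; fbsd_vm = 0.4273
    printf "Rocky Linux VM overhead: %.1f%%\n", (linux_vm - host) / host * 100
    printf "FreeBSD VM overhead:     %.1f%%\n", (fbsd_vm - host) / host * 100
}'
```

So even in the worse case the difference stays in the single-digit percent range, which matches the impression above that Bhyve's CPU overhead is small.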
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we start setting up \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2277} + +APP 2025-04-04T20:18:40.003Z --> calling completion event + +APP 2025-04-04T20:18:40.003Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:40.003Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:40.003Z --> copilot | completion request + +APP 2025-04-04T20:18:40.004Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:40.341Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:40.341Z --> completion hints: | k3s on the VMs. I haven't decided yet. + +APP 2025-04-04T20:18:40.341Z --> sent request | {"jsonrpc":"2.0","id":474,"result":{"isIncomplete":false,"items":[{"label":"k3s on the VMs. I haven't decided yet.","kind":1,"preselect":true,"detail":"k3s on the VMs. I haven't decided yet.","insertText":"k3s on the VMs. 
I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":171},"end":{"line":466,"character":171}}}]}]}} + +APP 2025-04-04T20:18:40.341Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:40.509Z --> received didChange | language: markdown | contentVersion: 2278 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:40.577Z --> received didChange | language: markdown | contentVersion: 2279 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:40.630Z --> received didChange | language: markdown | contentVersion: 2280 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:40.881Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":136,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":475} + +APP 2025-04-04T20:18:40.893Z --> received didChange | language: markdown | contentVersion: 2281 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:40.898Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":137,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":476} + +APP 
2025-04-04T20:18:41.099Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or we start setting up our \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2281} + +APP 2025-04-04T20:18:41.099Z --> calling completion event + +APP 2025-04-04T20:18:41.099Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:41.100Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:41.100Z --> copilot | completion request + +APP 2025-04-04T20:18:41.100Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:41.264Z --> received didChange | language: markdown | contentVersion: 2282 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:41.370Z --> received didChange | language: markdown | contentVersion: 2283 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:41.448Z --> received didChange | language: markdown | contentVersion: 2284 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:41.461Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:41.461Z --> completion hints: | k3s cluster on the VMs. I haven't decided yet. + +APP 2025-04-04T20:18:41.462Z --> sent request | {"jsonrpc":"2.0","id":476,"result":{"isIncomplete":false,"items":[{"label":"k3s cluster on the VMs. I haven't decided yet.","kind":1,"preselect":true,"detail":"k3s cluster on the VMs. I haven't decided yet.","insertText":"k3s cluster on the VMs. I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":183},"end":{"line":466,"character":183}}}]}]}} + +APP 2025-04-04T20:18:41.462Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:41.663Z --> received didChange | language: markdown | contentVersion: 2285 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:41.813Z --> received didChange | language: markdown | contentVersion: 2286 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:41.938Z --> received didChange | language: markdown | contentVersion: 2287 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:41.943Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" 
","triggerKind":2},"position":{"character":143,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":477} + +APP 2025-04-04T20:18:41.990Z --> received didChange | language: markdown | contentVersion: 2288 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:42.079Z --> received didChange | language: markdown | contentVersion: 2289 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:42.141Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:42.144Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, so I tried the same with the up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
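Before moving on to `ubench`, here is a quick back-of-the-envelope check on the silly Go benchmark figures above (plain `awk` arithmetic on the reported `BenchmarkCPUSilly1` ns/op values; these are single runs, so the percentages are rough at best):

```shell
# Slowdown of each VM run relative to the FreeBSD host run (0.4022 ns/op):
# Rocky Linux VM reported 0.4347 ns/op, the FreeBSD VM 0.4273 ns/op.
awk 'BEGIN {
    printf "Rocky Linux VM: %.1f%% slower\n", (0.4347 / 0.4022 - 1) * 100
    printf "FreeBSD VM:     %.1f%% slower\n", (0.4273 / 0.4022 - 1) * 100
}'
```

In other words, both VM runs stay within roughly 8% of the host run in this admittedly unscientific comparison.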
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up our first k3s\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2289} + +APP 2025-04-04T20:18:42.144Z --> skipping because content is stale + +APP 2025-04-04T20:18:42.144Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:42.144Z --> sent request | {"jsonrpc":"2.0","id":477,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:42.349Z --> received didChange | language: markdown | contentVersion: 2290 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:42.600Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":146,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":478} + +APP 2025-04-04T20:18:42.800Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
We run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on it):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs when the servers boot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't strictly required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. It may become useful later when there are more. After adding these lines, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nNext, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWe then configure the IPs on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, so I tried the same with the up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM and run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up our first k3s\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2290} + +APP 2025-04-04T20:18:42.801Z --> calling completion event + +APP 2025-04-04T20:18:42.801Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:42.801Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:42.801Z --> copilot | completion request + +APP 2025-04-04T20:18:42.801Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:43.203Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:43.204Z --> completion hints: | cluster with the 3 Rocky Linux VMs. I haven't decided yet. + +APP 2025-04-04T20:18:43.204Z --> sent request | {"jsonrpc":"2.0","id":478,"result":{"isIncomplete":false,"items":[{"label":"cluster with the 3 Rocky Linux VMs. I haven't decided yet.","kind":1,"preselect":true,"detail":"cluster with the 3 Rocky Linux VMs. 
I haven't decided yet.","insertText":"cluster with the 3 Rocky Linux VMs. I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":204},"end":{"line":466,"character":204}}}]}]}} + +APP 2025-04-04T20:18:43.204Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:44.244Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:44.244Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:45.508Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:46.868Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:50.005Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:50.809Z --> received didChange | language: markdown | contentVersion: 2291 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:50.814Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":147,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":479} + +APP 2025-04-04T20:18:51.015Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. 
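As a concrete illustration of what the population count means: 251 is `11111011` in binary, so its popcount is 7. A throwaway sketch counting the bits the slow way (plain `awk` arithmetic, not the CPU instruction itself):

```shell
# Count the 1-bits of 251 (binary 11111011) by repeated division;
# the POPCNT instruction produces the same answer in a single step.
awk 'BEGIN { n = 251; c = 0; while (n > 0) { c += n % 2; n = int(n / 2) }; print c }'
```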
CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU 
MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this needs adjusting to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add their entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! To be sure, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up our first k3s cluster with the 3 Rocky Linux VMs. I haven't decided yet.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2291} + +APP 2025-04-04T20:18:51.016Z --> calling completion event + +APP 2025-04-04T20:18:51.016Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:51.016Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:51.016Z --> copilot | completion request + +APP 2025-04-04T20:18:51.016Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:51.101Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:51.101Z --> sending diagnostics | [{"message":"","severity":1,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:51.101Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"","severity":1,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:51.765Z --> received didChange | language: markdown | contentVersion: 2292 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.265Z --> received didChange | language: markdown | contentVersion: 2293 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.293Z --> received didChange | language: markdown | contentVersion: 2294 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.321Z --> received didChange | language: markdown | contentVersion: 2295 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.351Z --> received didChange | language: markdown | contentVersion: 2296 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.381Z --> received didChange | language: markdown | contentVersion: 2297 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.412Z --> received didChange | language: markdown | contentVersion: 2298 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.443Z --> received didChange | language: markdown | contentVersion: 2299 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.472Z --> received didChange | language: markdown | contentVersion: 2300 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.624Z --> received didChange | language: markdown | contentVersion: 2301 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:52.801Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:52.838Z --> received didChange | language: markdown | contentVersion: 2302 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:53.032Z --> received didChange | language: markdown | contentVersion: 2303 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:53.213Z --> received didChange | language: markdown | contentVersion: 2304 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:53.388Z --> received didChange | language: markdown | contentVersion: 2305 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:53.749Z --> received didChange | language: markdown | contentVersion: 2306 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:54.216Z --> received didChange | language: markdown | contentVersion: 2307 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:54.466Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":135,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":480} + +APP 2025-04-04T20:18:54.570Z --> received didChange | language: markdown | contentVersion: 2308 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:54.668Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut this needs adjusting to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it, as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The examples below are all executed on `f0` (and on the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add their entries to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit `ubench` to one CPU for the first run with `-s 1`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   671010 (0.40s)
Ubench Single MEM:  1705237 (0.48s)
-----------------------------------
Ubench Single AVG:  1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2660220
Ubench MEM:  3095182
--------------------
Ubench AVG:  2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU:   672792 (0.40s)
Ubench Single MEM:   852757 (0.48s)
-----------------------------------
Ubench Single AVG:   762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU:  2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr  4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr  4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr  4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host constantly using 399% CPU (all 4 CPUs):

```
  PID USERNAME  THR PRI NICE  SIZE  RES STATE   C  TIME    WCPU COMMAND
 7449 root       14  20    0   14G  78M kqread  2  2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads. For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

See you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up k3s.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that bhyve VMs are CPU efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n````\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I have the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am sure that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing hilghly available storage with HAST or we start setting up k3s \n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2309} + +APP 2025-04-04T20:18:55.596Z --> calling completion event + +APP 2025-04-04T20:18:55.596Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:55.596Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:55.596Z --> copilot | completion request + +APP 2025-04-04T20:18:55.597Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:56.028Z --> received didChange | language: markdown | contentVersion: 2310 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.105Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:56.106Z --> completion hints: | on the VMs. I haven't decided yet. + +APP 2025-04-04T20:18:56.106Z --> sent request | {"jsonrpc":"2.0","id":481,"result":{"isIncomplete":false,"items":[{"label":"on the VMs. 
I haven't decided yet.","kind":1,"preselect":true,"detail":"on the VMs. I haven't decided yet.","insertText":"on the VMs. I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":171},"end":{"line":466,"character":171}}}]}]}} + +APP 2025-04-04T20:18:56.106Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:56.200Z --> received didChange | language: markdown | contentVersion: 2311 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.262Z --> received didChange | language: markdown | contentVersion: 2312 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.268Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":140,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":482} + +APP 2025-04-04T20:18:56.336Z --> received didChange | language: markdown | contentVersion: 2313 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.446Z --> received didChange | language: markdown | contentVersion: 2314 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.469Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: 
Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot (plus some other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB of RAM). 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
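As an aside on the `truncate` step above: growing the image file this way only changes its apparent size; the file stays sparse, and blocks are allocated only once the guest writes to them. A minimal local sketch of the effect (the file name is a throwaway, and the `stat`/`du` flags are the GNU/Linux ones; the BSD flags differ):

```shell
# Create a small sparse image, then grow it, as done for the VM disk above.
truncate -s 20M demo-disk0.img
truncate -s 100M demo-disk0.img

# The apparent size is now 100M (104857600 bytes) ...
stat -c %s demo-disk0.img

# ... but almost no blocks are actually allocated on disk yet.
du -k demo-disk0.img

# Clean up the throwaway file.
rm demo-disk0.img
```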
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps on all 3 VMs. The examples shown are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static addresses. 
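The static address layout is systematic (`r0` gets `.120`, `r1` gets `.121`, `r2` gets `.122`), so the per-VM network commands shown in the next section could also be generated from the VM index. A small sketch that only prints the commands (the `gen_nmcli` helper is hypothetical and not part of the actual setup):

```shell
# Print the NetworkManager commands for one VM index; the interface name
# and address scheme match the setup described in this post.
gen_nmcli() {
    idx=$1
    ip="192.168.1.12${idx}"
    echo "nmcli connection modify enp0s5 ipv4.addresses ${ip}/24"
    echo "nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1"
    echo "nmcli connection modify enp0s5 ipv4.dns 192.168.1.1"
    echo "nmcli connection modify enp0s5 ipv4.method manual"
}

# Show the commands for all three VMs.
for i in 0 1 2; do
    gen_nmcli "$i"
done
```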
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
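(As a quick aside before the `ubench` runs: the gap between the silly-benchmark numbers on the FreeBSD host and in the Rocky Linux VM can be put into a percentage. The ns/op values below are the `BenchmarkCPUSilly1` figures reported above.)

```shell
# Relative slowdown of the Rocky Linux VM versus the FreeBSD host,
# computed from the reported ns/op values of BenchmarkCPUSilly1.
awk 'BEGIN {
    host = 0.4022   # ns/op on the FreeBSD host
    vm   = 0.4347   # ns/op in the Rocky Linux VM @ Bhyve
    printf "%.1f%% slower\n", (vm / host - 1) * 100
}'
# prints: 8.1% slower
```

So the "slight" difference amounts to roughly 8% in this micro-benchmark.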
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has enough RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up k3s on the VMs. I haven't decided yet.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2314} + +APP 2025-04-04T20:18:56.469Z --> skipping because content is stale + +APP 2025-04-04T20:18:56.469Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:56.469Z --> sent request | {"jsonrpc":"2.0","id":482,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:56.543Z --> received didChange | language: markdown | contentVersion: 2315 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.629Z --> received didChange | language: markdown | contentVersion: 2316 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.635Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":144,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":483} + +APP 2025-04-04T20:18:56.769Z --> received didChange | language: markdown | contentVersion: 2317 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:56.831Z --> received didChange | language: markdown | contentVersion: 2318 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or start setting up k3s on the Ro\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2318} + +APP 2025-04-04T20:18:56.835Z --> skipping because content is stale + +APP 2025-04-04T20:18:56.835Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:56.835Z --> sent request | {"jsonrpc":"2.0","id":483,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:56.978Z --> received didChange | language: markdown | contentVersion: 2319 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.071Z --> received didChange | language: markdown | contentVersion: 2320 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.293Z --> received didChange | language: markdown | contentVersion: 2321 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.363Z --> received didChange | language: markdown | contentVersion: 2322 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.369Z --> received request: | 
{"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":150,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":484} + +APP 2025-04-04T20:18:57.540Z --> received didChange | language: markdown | contentVersion: 2323 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.571Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nFor the guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image files to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. The following examples are all executed on `f0` (which runs the VM `r0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to prove that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! I am aware that this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST or start setting up k3s on the Rocky L\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2323} + +APP 2025-04-04T20:18:57.571Z --> skipping because content is stale + +APP 2025-04-04T20:18:57.571Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:57.571Z --> sent request | {"jsonrpc":"2.0","id":484,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:18:57.618Z --> received didChange | language: markdown | contentVersion: 2324 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.846Z --> received didChange | language: markdown | contentVersion: 2325 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.898Z --> received didChange | language: markdown | contentVersion: 2326 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:57.970Z --> received didChange | language: markdown | contentVersion: 2327 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:58.055Z --> received didChange | 
language: markdown | contentVersion: 2328 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:58.062Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":156,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":485} + +APP 2025-04-04T20:18:58.263Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). 
In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nFor the guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs changes to make Rocky Linux boot, plus some other adjustments: e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped each VM again, ran `truncate` on its image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
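The upcoming `/etc/hosts` entries for the Rocky VMs follow a simple pattern. As a side note, they could also be generated instead of typed; this is a hypothetical helper (not part of the original setup), assuming the `r0`..`r2` names and `192.168.1.12x` scheme used in this series:

```sh
#!/bin/sh
# Hypothetical helper: print the /etc/hosts lines for the Rocky VMs
# r0..r2 (192.168.1.120..122). Pipe into `doas tee -a /etc/hosts`
# on each FreeBSD host if desired.
for n in 0 1 2; do
    printf '192.168.1.12%s r%s r%s.lan r%s.lan.buetow.org\n' "$n" "$n" "$n" "$n"
done
```

Generating the lines once keeps the host systems and the VMs from drifting apart.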
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit faster than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
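A side note on the silly Go benchmark from earlier: ~0.4 ns/op is close to the cost of an empty loop, so the compiler may be dropping the `_ = i * i` body entirely; assigning to the blank identifier does not necessarily keep the work alive. A hedged sketch (not part of the sillybench repository) that stores into a package-level sink and runs via `testing.Benchmark`, outside of `go test`:

```go
package main

import (
	"fmt"
	"testing"
)

// A package-level sink defeats dead-code elimination: the compiler
// must assume the stored value is observable.
var sink int

func main() {
	// testing.Benchmark runs the function with increasing b.N until
	// the timing is stable, without needing the `go test` runner.
	r := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sink = i * i
		}
	})
	fmt.Printf("%d iterations, %.4f ns/op\n",
		r.N, float64(r.T.Nanoseconds())/float64(r.N))
}
```

If the sink variant reports noticeably more than ~0.4 ns/op, that would suggest the original blank-identifier version was indeed partly optimized away.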
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2328} + +APP 2025-04-04T20:18:58.263Z --> calling completion event + +APP 2025-04-04T20:18:58.264Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:18:58.264Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:18:58.264Z --> copilot | completion request + +APP 2025-04-04T20:18:58.264Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:18:58.522Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:18:58.523Z --> completion hints: | VMs. I haven't decided yet. + +APP 2025-04-04T20:18:58.523Z --> sent request | {"jsonrpc":"2.0","id":485,"result":{"isIncomplete":false,"items":[{"label":"VMs. I haven't decided yet.","kind":1,"preselect":true,"detail":"VMs. I haven't decided yet.","insertText":"VMs. 
I haven't decided yet.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":183},"end":{"line":466,"character":183}}}]}]}} + +APP 2025-04-04T20:18:58.523Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:18:59.437Z --> received didChange | language: markdown | contentVersion: 2329 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:59.575Z --> received didChange | language: markdown | contentVersion: 2330 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:59.816Z --> received didChange | language: markdown | contentVersion: 2331 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:18:59.942Z --> received didChange | language: markdown | contentVersion: 2332 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:01.017Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:01.102Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:05.597Z --> sent request | 
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:08.265Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:09.554Z --> received didChange | language: markdown | contentVersion: 2333 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:09.647Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":74,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":486} + +APP 2025-04-04T20:19:09.849Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. 
It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. 
We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `zroot/bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs the guest OS for the VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut the configuration needs a few changes to make Rocky Linux boot. Also, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped each VM again, ran `truncate` on its image file to enlarge it to 100G, and restarted the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static IPs. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to show that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for FreeBSD as well as for Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nBut here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit faster than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2333} + +APP 2025-04-04T20:19:09.850Z --> calling completion event + +APP 2025-04-04T20:19:09.850Z --> sending diagnostics | [{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}}}] + +APP 2025-04-04T20:19:09.850Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[{"message":"Fetching completion...","severity":3,"range":{"start":{"line":466,"character":0},"end":{"line":467,"character":0}},"source":"helix-gpt"}]}} + +APP 2025-04-04T20:19:09.850Z --> copilot | completion request + +APP 2025-04-04T20:19:09.850Z --> fetch | /v1/engines/copilot-codex/completions + +APP 2025-04-04T20:19:09.867Z --> received didChange | language: markdown | contentVersion: 2334 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:09.947Z --> received didChange | language: markdown | contentVersion: 2335 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:10.201Z --> received didChange | language: markdown | contentVersion: 2336 | uri: 
file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:10.339Z --> response | https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions | 200 + +APP 2025-04-04T20:19:10.341Z --> completion hints: | k3s on the VMs, or maybe we will be doing something else. I am not sure yet. But I am sure it will be fun! + +APP 2025-04-04T20:19:10.341Z --> sent request | {"jsonrpc":"2.0","id":486,"result":{"isIncomplete":false,"items":[{"label":"k3s on the VMs, or maybe we will be doing something else. I am not sure yet. But I am sure it will be fun!","kind":1,"preselect":true,"detail":"k3s on the VMs, or maybe we will be doing something else. I am not sure yet. But I am sure it will be fun!","insertText":"k3s on the VMs, or maybe we will be doing something else. I am not sure yet. But I am sure it will be fun!","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":180},"end":{"line":466,"character":258}}}]}]}} + +APP 2025-04-04T20:19:10.341Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:10.577Z --> received didChange | language: markdown | contentVersion: 2337 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:10.972Z --> received didChange | language: markdown | contentVersion: 2338 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:11.218Z --> received didChange | language: markdown | contentVersion: 2339 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:13.204Z --> received 
didChange | language: markdown | contentVersion: 2340 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:13.282Z --> received didChange | language: markdown | contentVersion: 2341 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:13.456Z --> received didChange | language: markdown | contentVersion: 2342 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:13.532Z --> received didChange | language: markdown | contentVersion: 2343 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:13.783Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerKind":1},"position":{"character":78,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":487} + +APP 2025-04-04T20:19:13.803Z --> received didChange | language: markdown | contentVersion: 2344 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:13.985Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration won't boot Rocky Linux, though, and it needs a few other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
So we run `doas vm configure rocky` and modify the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VM again, ran `truncate` on the image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
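As a side note, `truncate -s 100G` grows the image sparsely: only the file's apparent size changes, and blocks in the `zroot/bhyve` dataset are allocated as the guest actually writes to them. A small, self-contained Go sketch (illustration only, not part of the setup) shows the same effect:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// sparseDemo grows an empty file to 1 GiB the same way `truncate -s`
// does, then returns its apparent size and the bytes actually allocated.
func sparseDemo() (size, allocated int64) {
	path := filepath.Join(os.TempDir(), "sparse-demo.img")
	f, err := os.Create(path)
	if err != nil {
		panic(err)
	}
	if err := f.Truncate(1 << 30); err != nil { // apparent size: 1 GiB
		panic(err)
	}
	f.Close()
	defer os.Remove(path)

	var st syscall.Stat_t
	if err := syscall.Stat(path, &st); err != nil {
		panic(err)
	}
	// st.Blocks counts 512-byte units; a sparse file allocates almost
	// nothing despite its large apparent size.
	return st.Size, st.Blocks * 512
}

func main() {
	size, allocated := sparseDemo()
	fmt.Printf("apparent size: %d bytes, allocated: %d bytes\n", size, allocated)
}
```

This is why the freshly enlarged `disk0.img` doesn't instantly consume 100G of pool space (the exact allocation behaviour depends on the filesystem holding the image).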
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs; the examples are all executed on `f0` (for the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding it, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
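Since the three VMs differ only in the last octet of their IP address and in their hostname, the per-VM configuration commands can be derived from the numbering scheme. A small, hypothetical Go helper that prints them (the `enp0s5` interface name and the `192.168.1.120`-based addressing match this setup, but are assumptions if your environment differs):

```go
package main

import "fmt"

// nmcliCommands derives the static-IP commands for Rocky VM rN, which
// by convention here gets the address 192.168.1.(120+N).
func nmcliCommands(n int) []string {
	iface := "enp0s5" // interface name inside the Rocky Linux guests
	ip := fmt.Sprintf("192.168.1.%d", 120+n)
	return []string{
		fmt.Sprintf("nmcli connection modify %s ipv4.addresses %s/24", iface, ip),
		fmt.Sprintf("nmcli connection modify %s ipv4.gateway 192.168.1.1", iface),
		fmt.Sprintf("nmcli connection modify %s ipv4.dns 192.168.1.1", iface),
		fmt.Sprintf("nmcli connection modify %s ipv4.method manual", iface),
		fmt.Sprintf("hostnamectl set-hostname r%d.lan.buetow.org", n),
	}
}

func main() {
	for n := 0; n < 3; n++ {
		fmt.Printf("# VM r%d:\n", n)
		for _, cmd := range nmcliCommands(n) {
			fmt.Println(cmd)
		}
	}
}
```

With only three VMs this is overkill, of course; it merely documents the scheme in executable form.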
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC to each of the VMs and entering the following commands:\n\n```sh\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % nmcli connection down enp0s5\n[root@r0 ~] % nmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~] % dnf update\n[root@r0 ~] % reboot\n```\n\n## Stress testing CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older, but I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nAs I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use more than that anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Of course, this is not a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs).\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future F3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will be installing highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2344} + +APP 2025-04-04T20:19:13.985Z --> skipping because content is stale + +APP 2025-04-04T20:19:13.985Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:13.985Z --> sent request | {"jsonrpc":"2.0","id":487,"result":{"isIncomplete":false,"items":[]}} + +APP 2025-04-04T20:19:14.256Z --> received didChange | language: markdown | contentVersion: 2345 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:14.335Z --> received didChange | language: markdown | contentVersion: 2346 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:19:14.341Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":81,"line":466},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":488} + +APP 2025-04-04T20:19:14.542Z --> running completion on buffer | 
{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. 
Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of overhead. We also install the package required to make Bhyve work with UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all its data in the `bhyve` dataset of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs, I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. 
This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install Rocky Linux from the latest minimal ISO:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n\n### VM configuration\n\nThe default Bhyve VM configuration now looks like this:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nThis default configuration won't boot Rocky Linux, though, and it needs a few other adjustments; e.g., as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs of 4 CPU cores and 14GB of RAM. 
So we ran `doas vm configure rocky` and modified the configuration to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 is now open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or in a more automated fashion, but there are only three VMs to install, and the automation doesn't seem worth it as we do this only once a year or less often.\n\n### Increasing the disk image size\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on each image file to enlarge it to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened a VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
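For the record, a dry-run sketch of what such automation could look like (hypothetical and untested against real hosts; it just prints the commands from the sections above, once per host):

```sh
# Dry-run sketch: print the per-host commands to grow each VM's disk image
# and re-run the installer. Removing the `echo` would execute them over SSH,
# assuming key-based SSH access and doas permissions on the hosts.
for host in f0 f1 f2; do
    echo ssh "$host" "doas vm stop rocky"
    echo ssh "$host" "doas truncate -s 100G /bhyve/rocky/disk0.img"
    echo ssh "$host" "doas vm install rocky Rocky-9.5-x86_64-minimal.iso"
done
```

With only three VMs, though, doing it by hand is just as quick.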
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f2:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all three VMs. The examples below are all executed on `f0` (and in the VM `r0`, which runs on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VMs after a host reboot, we add the following to `/etc/rc.conf` on the FreeBSD hosts:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\nEND\n```\n\nThe `vm_delay` isn't really required. It makes `vm-bhyve` wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding this, there's now a `Yes` indicator in the `AUTO` column:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs from DHCP to static. 
As per the previous post of this series, the three FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add these entries on the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via VNC and entering the following commands on each VM:\n\n```sh\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.addresses 192.168.1.120/24\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.dns 192.168.1.1\n[root@r0 ~]# nmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~]# nmcli connection down enp0s5\n[root@r0 ~]# nmcli connection up enp0s5\n[root@r0 ~]# hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~]# cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nWhere:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/ssh/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter the reboot, I copied my public key from my laptop to the three VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all three VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.\n\n### Install latest updates\n\n```sh\n[root@r0 ~]# dnf update\n[root@r0 ~]# reboot\n```\n\n## Stress testing the CPU\n\nThe aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go (the benchmarks live in a `_test.go` file):\n\n```go\npackage main\n\nimport \"testing\"\n\nfunc BenchmarkCPUSilly1(b *testing.B) {\n\tfor i := 0; i < b.N; i++ {\n\t\t_ = i * i\n\t}\n}\n\nfunc BenchmarkCPUSilly2(b *testing.B) {\n\tvar sillyResult float64\n\tfor i := 0; i < b.N; i++ {\n\t\tsillyResult += float64(i)\n\t\tsillyResult *= float64(i)\n\t\tdivisor := float64(i) + 1\n\t\tif divisor > 0 {\n\t\t\tsillyResult /= divisor\n\t\t}\n\t}\n\t_ = sillyResult // to avoid compiler optimization\n}\n```\n\nYou can find the repository here:\n\n=> https://codeberg.org/snonux/sillybench\n\n### Silly FreeBSD host benchmark\n\nTo install it on FreeBSD, we run:\n\n```sh\npaul@f0:~ % doas pkg install git go\npaul@f0:~ % mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\npaul@f0:~/git/sillybench % go version\ngo version go1.24.1 freebsd/amd64\n\npaul@f0:~/git/sillybench % go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4022 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4027 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.891s\n```\n\n### Silly Rocky Linux VM @ Bhyve benchmark\n\nOK, let's compare this with the Rocky Linux VM running on Bhyve:\n\n```sh\n[root@r0 ~]# dnf install golang git\n[root@r0 ~]# mkdir ~/git && cd ~/git && \\\n git clone https://codeberg.org/snonux/sillybench && \\\n cd sillybench\n```\n\nAnd to run it:\n\n```sh\n[root@r0 sillybench]# go version\ngo version go1.22.9 (Red Hat 1.22.9-2.el9_5) 
linux/amd64\n[root@r0 sillybench]# go test -bench=.\ngoos: linux\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1-4 1000000000 0.4347 ns/op\nBenchmarkCPUSilly2-4 1000000000 0.4345 ns/op\n```\n\nThe Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with an up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.\n\n### Silly FreeBSD VM @ Bhyve benchmark\n\nBut as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.\n\nHere are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 8GB of RAM; the benchmark won't use as many CPUs anyway):\n\n```sh\nroot@freebsd:~/git/sillybench # go test -bench=.\ngoos: freebsd\ngoarch: amd64\npkg: codeberg.org/snonux/sillybench\ncpu: Intel(R) N100\nBenchmarkCPUSilly1 1000000000 0.4273 ns/op\nBenchmarkCPUSilly2 1000000000 0.4286 ns/op\nPASS\nok codeberg.org/snonux/sillybench 0.949s\n```\n\nIt's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!\n\n## Benchmarking with `ubench`\n\nLet's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance. 
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.\n\n### FreeBSD host `ubench` benchmark\n\nSingle CPU:\n\n```sh\npaul@f0:~ % doas ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 671010 (0.40s)\nUbench Single MEM: 1705237 (0.48s)\n-----------------------------------\nUbench Single AVG: 1188123\n\n```\n\nAll CPUs (with all Bhyve VMs stopped):\n\n```sh\npaul@f0:~ % doas ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2660220\nUbench MEM: 3095182\n--------------------\nUbench AVG: 2877701\n```\n\n### FreeBSD VM @ Bhyve `ubench` benchmark\n\nSingle CPU:\n\n```sh\nroot@freebsd:~ # ubench -s 1\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench Single CPU: 672792 (0.40s)\nUbench Single MEM: 852757 (0.48s)\n-----------------------------------\nUbench Single AVG: 762774\n```\n\nAll CPUs:\n\n```sh\nroot@freebsd:~ # ubench\nUnix Benchmark Utility v.0.3\nCopyright (C) July, 1999 PhysTech, Inc.\nAuthor: Sergei Viznyuk <sv@phystech.com>\nhttp://www.phystech.com/download/ubench.html\nFreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\nUbench CPU: 2652857\nswap_pager: out of swap space\nswp_pager_getswapspace(27): failed\nswap_pager: out of swap space\nswp_pager_getswapspace(18): failed\nApr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nswp_pager_getswapspace(6): failed\nApr 4 23:02:46 
freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\nApr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory\n```\n\nThe multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB RAM, but I am not investigating further.\n\nAlso, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):\n\n```\n PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND\n 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve\n```\n\nOverall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?\n\n### Rocky Linux VM @ Bhyve `ubench` benchmark\n\nUnfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories. So, I skipped this test.\n\n## Conclusion\n\nHaving Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.\n\nFuture uses (out of scope for this blog series) would be additional VMs for different workloads. 
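Creating such an additional VM is cheap: another `doas vm create <name>` plus a config in the same format as the Rocky one above is all it takes. As a sketch, a hypothetical second Linux guest named `playground` could get a config along these lines (the name and values are illustrative assumptions, not tested):

```
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=2
memory=4G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
```
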
For example, how about a Windows or NetBSD VM to tinker with?\n\nThis flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.\n\nSee you in the next blog post of this series. Maybe we will install highly available storage with HAST, or maybe we will start setting up k3s on the Rocky Linux VMs.\n\nOther *BSD-related posts:\n\n<< template::inline::rindex bsd\n\nE-Mail your comments to `paul@nospam.buetow.org`\n\n=> ../ Back to the main site\n","languageId":"markdown","version":2346}
Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. + +APP 2025-04-04T20:19:16.624Z --> sent request | {"jsonrpc":"2.0","id":488,"result":{"isIncomplete":false,"items":[{"label":"available k3s on the FreeBSD hosts. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post.","kind":1,"preselect":true,"detail":"available k3s on the FreeBSD hosts. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. 
Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post.","insertText":"available k3s on the FreeBSD hosts. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post. 
Or maybe we will be installing a FreeBSD VM on Bhyve to run the same silly benchmark as we did in this post.","insertTextFormat":1,"additionalTextEdits":[{"newText":"","range":{"start":{"line":466,"character":988},"end":{"line":466,"character":1066}}}]}]}} + +APP 2025-04-04T20:19:16.624Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:19.851Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:19:24.542Z --> sent request | {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","diagnostics":[]}} + +APP 2025-04-04T20:20:32.785Z --> received didChange | language: markdown | contentVersion: 2347 | uri: file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl + +APP 2025-04-04T20:20:32.904Z --> received request: | {"jsonrpc":"2.0","method":"textDocument/completion","params":{"context":{"triggerCharacter":" ","triggerKind":2},"position":{"character":164,"line":350},"textDocument":{"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl"}},"id":489} + +APP 2025-04-04T20:20:33.107Z --> running completion on buffer | {"uri":"file:///home/paul/git/foo.zone-content/gemtext/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl","text":"# f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs\n\nThis is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? 
The \"f\" stands for FreeBSD, and the \"3s\" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.\n\n<< template::inline::index f3s-kubernetes-with-freebsd-part\n\n=> ./f3s-kubernetes-with-freebsd-part-1/f3slogo.png f3s logo\n\n<< template::inline::toc\n\n## Introduction\n\nIn this blog post, we are going to install the Bhyve hypervisor.\n\nThe FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve's strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It's efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.\n\nBhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.\n\n## Check for `POPCNT` CPU support\n\nPOPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.\n\nTo check for `POPCNT` support, run:\n\n```sh\npaul@f0:~ % dmesg | grep 'Features2=.*POPCNT'\n Features2=0x7ffafbbf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,\n\tFMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,\n\tOSXSAVE,AVX,F16C,RDRAND>\n```\n\nSo it's there! 
All good.\n\n## Basic Bhyve setup\n\nFor managing the Bhyve VMs, we are using `vm-bhyve`, a tool not part of the FreeBSD operating system but available as a ready-to-use package. It eases VM management and reduces a lot of overhead. We also install the required package to make Bhyve work with the UEFI firmware.\n\n=> https://github.com/churchers/vm-bhyve\n\nThe following commands are executed on all three hosts `f0`, `f1`, and `f2`, where `re0` is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):\n\n```sh\npaul@f0:~ % doas pkg install vm-bhyve bhyve-firmware\npaul@f0:~ % doas sysrc vm_enable=YES\nvm_enable: -> YES\npaul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve\nvm_dir: -> zfs:zroot/bhyve\npaul@f0:~ % doas zfs create zroot/bhyve\npaul@f0:~ % doas vm init\npaul@f0:~ % doas vm switch create public\npaul@f0:~ % doas vm switch add public re0\n```\n\nBhyve stores all it's data in the `/bhyve` of the `zroot` ZFS pool:\n\n```sh\npaul@f0:~ % zfs list | grep bhyve\nzroot/bhyve 1.74M 453G 1.74M /zroot/bhyve\n```\n\nFor convenience, we also create this symlink:\n\n```sh\npaul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve\n\n```\n\nNow, Bhyve is ready to rumble, but no VMs are there yet:\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\n```\n\n## Rocky Linux VMs\n\nAs guest VMs I decided to use Rocky Linux.\n\nUsing Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades. 
Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.\n\n=> https://rockylinux.org/\n\n### ISO download\n\nWe're going to install the Rocky Linux from the latest minimal iso:\n\n```sh\npaul@f0:~ % doas vm iso \\\n https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.5-x86_64-minimal.iso\n/zroot/bhyve/.iso/Rocky-9.5-x86_64-minimal.iso 1808 MB 4780 kBps 06m28s\npaul@f0:/bhyve % doas vm create rocky\n```\n### VM configuration\n\nThe default Bhyve VM configuration looks like this now:\n\n```sh\npaul@f0:/bhyve/rocky % cat rocky.conf\nloader=\"bhyveload\"\ncpu=1\nmemory=256M\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\nuuid=\"1c4655ac-c828-11ef-a920-e8ff1ed71ca0\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\nThe `uuid` and the `network0_mac` differ for each of the three VMs.\n\nBut to make Rocky Linux boot it (plus some other adjustments, e.g. as we intend to run the majority of the workload in the k3s cluster running on those Linux VMs, we give them beefy specs like 4 CPU cores and 14GB RAM). 
So we run `doas vm configure rocky` and modified it to:\n\n```\nguest=\"linux\"\nloader=\"uefi\"\nuefi_vars=\"yes\"\ncpu=4\nmemory=14G\nnetwork0_type=\"virtio-net\"\nnetwork0_switch=\"public\"\ndisk0_type=\"virtio-blk\"\ndisk0_name=\"disk0.img\"\ngraphics=\"yes\"\ngraphics_vga=io\nuuid=\"1c45400b-c828-11ef-8871-e8ff1ed71cac\"\nnetwork0_mac=\"58:9c:fc:0d:13:3f\"\n```\n\n### VM installation\n\nTo start the installer from the downloaded ISO, we run:\n\n```sh\npaul@f0:~ % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\nStarting rocky\n * found guest in /zroot/bhyve/rocky\n * booting...\n\npaul@f0:/bhyve/rocky % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 No Locked (f0.lan.buetow.org)\n\npaul@f0:/bhyve/rocky % doas sockstat -4 | grep 5900\nroot bhyve 6079 8 tcp4 *:5900 *:*\n```\n\nPort 5900 now also opens for VNC connections, so I connected it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn't seem worth it as we do it only once a year or less often.\n\n### Increase of the disk image\n\nBy default, the VM disk image is only 20G, which is a bit small for our purposes, so I stopped the VMs again, ran `truncate` on the image file to enlarge them to 100G, and re-started the installation:\n\n```sh\npaul@f0:/bhyve/rocky % doas vm stop rocky\npaul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img\npaul@f0:/bhyve/rocky % doas vm install rocky Rocky-9.5-x86_64-minimal.iso\n```\n\n### Connect to VNC\n\nFor the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn't worth the effort. 
The three VNC addresses of the VMs were `vnc://f0:5900`, `vnc://f1:5900`, and `vnc://f0:5900`.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/1.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/2.png\n\nI primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.\n\n=> ./f3s-kubernetes-with-freebsd-part-4/3.png\n\n=> ./f3s-kubernetes-with-freebsd-part-4/4.png\n\n## After install\n\nWe perform the following steps for all 3 VMs. In the following, the examples are all executed on `f0` (the VM `r0` running on `f0`):\n\n### VM auto-start after host reboot\n\nTo automatically start the VM on the servers, we add the following to the `rc.conf` on the FreeBSD hosts:\n\n```sh\n\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/rc.conf\nvm_list=\"rocky\"\nvm_delay=\"5\"\n```\n\nThe `vm_delay` isn't really required. It is used to wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding, there's now a `Yes` indicator in the `AUTO` column.\n\n```sh\npaul@f0:~ % doas vm list\nNAME DATASTORE LOADER CPU MEMORY VNC AUTO STATE\nrocky default uefi 4 14G 0.0.0.0:5900 Yes [1] Running (2063)\n```\n\n### Static IP configuration\n\nAfter that, we change the network configuration of the VMs to be static (from DHCP) here. 
As per the previous post of this series, the 3 FreeBSD hosts were already in my `/etc/hosts` file:\n\n```\n192.168.1.130 f0 f0.lan f0.lan.buetow.org\n192.168.1.131 f1 f1.lan f1.lan.buetow.org\n192.168.1.132 f2 f2.lan f2.lan.buetow.org\n```\n\nFor the Rocky VMs, we add those to the FreeBSD host systems as well:\n\n```sh\npaul@f0:/bhyve/rocky % cat <<END | doas tee -a /etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n```\n\nAnd we configure the IPs accordingly on the VMs themselves by opening a root shell via RDP to the VMs and entering the following commands on each of the VMs:\n\n```sh\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.address 192.168.1.120/24\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.gateway 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.DNS 192.168.1.1\n[root@r0 ~] % dnmcli connection modify enp0s5 ipv4.method manual\n[root@r0 ~] % dnmcli connection down enp0s5\n[root@r0 ~] % dnmcli connection up enp0s5\n[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org\n[root@r0 ~] % cat <<END >>/etc/hosts\n192.168.1.120 r0 r0.lan r0.lan.buetow.org\n192.168.1.121 r1 r1.lan r1.lan.buetow.org\n192.168.1.122 r2 r2.lan r2.lan.buetow.org\nEND\n````\n\nWhereas:\n\n* `192.168.1.120` is the IP of the VM itself (here: `r0.lan.buetow.org`)\n* `192.168.1.1` is the address of my home router, which also does DNS.\n\n### Permitting root login\n\nAs these VMs aren't directly reachable via SSH from the internet, we enable `root` login by adding a line with `PermitRootLogin yes` to `/etc/sshd/sshd_config`.\n\nOnce done, we reboot the VM by running `reboot` inside the VM to test whether everything was configured and persisted correctly.\n\nAfter reboot, I copied my public key from my Laptop to the 3 VMs:\n\n```sh\n% for i in 0 1 2; do ssh-copy-id root@r$i.lan.buetow.org; done\n```\n\nThen, I edited the `/etc/ssh/sshd_config` file again on all 3 VMs 
and configured `PasswordAuthentication no` to only allow SSH key authentication from now on.

### Install latest updates

```sh
[root@r0 ~] % dnf update
[root@r0 ~] % reboot
```

## Stress testing CPU

The aim is to prove that bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:

```go
package main

import "testing"

func BenchmarkCPUSilly1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = i * i
	}
}

func BenchmarkCPUSilly2(b *testing.B) {
	var sillyResult float64
	for i := 0; i < b.N; i++ {
		sillyResult += float64(i)
		sillyResult *= float64(i)
		divisor := float64(i) + 1
		if divisor > 0 {
			sillyResult /= divisor
		}
	}
	_ = sillyResult // to avoid compiler optimization
}
```

You can find the repository here:

=> https://codeberg.org/snonux/sillybench

### Silly FreeBSD host benchmark

To install it on FreeBSD, we run:

```sh
paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
paul@f0:~/git/sillybench % go version
go version go1.24.1 freebsd/amd64

paul@f0:~/git/sillybench % go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4022 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4027 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.891s
```

### Silly Rocky Linux VM @ Bhyve benchmark

OK, let's compare this with the Rocky Linux VM running on Bhyve:

```sh
[root@r0 ~]# dnf install golang git
[root@r0 ~]# mkdir ~/git && cd ~/git && \
    git clone https://codeberg.org/snonux/sillybench && \
    cd sillybench
```

And to run it:

```sh
[root@r0 sillybench]# go version
go version go1.22.9 (Red Hat 1.22.9-2.el9_5) linux/amd64
[root@r0 sillybench]# go test -bench=.
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-4 1000000000 0.4347 ns/op
BenchmarkCPUSilly2-4 1000000000 0.4345 ns/op
```

The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older; I tried the same with the up-to-date version of Go (1.24.x), with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.

### Silly FreeBSD VM @ Bhyve benchmark

But as I am curious and don't want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.

But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14 GB of RAM; the benchmark won't use as many CPUs anyway):

```sh
root@freebsd:~/git/sillybench # go test -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1 1000000000 0.4273 ns/op
BenchmarkCPUSilly2 1000000000 0.4286 ns/op
PASS
ok codeberg.org/snonux/sillybench 0.949s
```

It's a bit better than Linux! Granted, this is not really a scientific benchmark, so take the results with a grain of salt!

## Benchmarking with `ubench`

Let's run another, more sophisticated benchmark using `ubench`, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running `doas pkg install ubench`. It can benchmark CPU and memory performance.
Here, we limit it to one CPU for the first run with `-s`, and then let it run at full speed in the second run.

### FreeBSD host `ubench` benchmark

Single CPU:

```sh
paul@f0:~ % doas ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 671010 (0.40s)
Ubench Single MEM: 1705237 (0.48s)
-----------------------------------
Ubench Single AVG: 1188123
```

All CPUs (with all Bhyve VMs stopped):

```sh
paul@f0:~ % doas ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2660220
Ubench MEM: 3095182
--------------------
Ubench AVG: 2877701
```

### FreeBSD VM @ Bhyve `ubench` benchmark

Single CPU:

```sh
root@freebsd:~ # ubench -s 1
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench Single CPU: 672792 (0.40s)
Ubench Single MEM: 852757 (0.48s)
-----------------------------------
Ubench Single AVG: 762774
```

All CPUs:

```sh
root@freebsd:~ # ubench
Unix Benchmark Utility v.0.3
Copyright (C) July, 1999 PhysTech, Inc.
Author: Sergei Viznyuk <sv@phystech.com>
http://www.phystech.com/download/ubench.html
FreeBSD 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64
Ubench CPU: 2652857
swap_pager: out of swap space
swp_pager_getswapspace(27): failed
swap_pager: out of swap space
swp_pager_getswapspace(18): failed
Apr 4 23:02:43 freebsd kernel: pid 862 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(6): failed
Apr 4 23:02:46 freebsd kernel: pid 863 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:47 freebsd kernel: pid 864 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:48 freebsd kernel: pid 865 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:49 freebsd kernel: pid 861 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
Apr 4 23:02:51 freebsd kernel: pid 839 (ubench), jid 0, uid 0, was killed: failed to reclaim memory
```

The multi-CPU benchmark in the Bhyve VM ran with almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14 GB of RAM, but I am not investigating further.

Also, during the benchmark, I noticed the `bhyve` process on the host was constantly using 399% of the CPU (all 4 CPUs):

```
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 7449 root 14 20 0 14G 78M kqread 2 2:12 399.81% bhyve
```

Overall, Bhyve has a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, but I don't know whether to trust it. Do you have an idea?

### Rocky Linux VM @ Bhyve `ubench` benchmark

Unfortunately, I wasn't able to find `ubench` in any of the Rocky Linux repositories, so I skipped this test.

## Conclusion

Having Linux VMs running inside FreeBSD's Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes) in the Linux world while keeping the steady reliability of FreeBSD.

Future uses (out of scope for this blog series) would be additional VMs for different workloads.
For example, how about a Windows or NetBSD VM to tinker with?

This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it's a nice setup for getting the most out of my hardware and keeping things running smoothly.

See you in the next blog post of this series. Maybe we will install highly available storage with HAST, or we will start setting up k3s on the Rocky Linux VMs.

Other *BSD-related posts:

<< template::inline::rindex bsd

E-Mail your comments to `paul@nospam.buetow.org`

=> ../ Back to the main site
diff --git a/gemfeed/index.html b/gemfeed/index.html
index df598414..a7cdbad6 100644
--- a/gemfeed/index.html
+++ b/gemfeed/index.html
@@ -15,6 +15,7 @@
 <br />
 <h2 style='display: inline' id='to-be-in-the-zone'>To be in the .zone!</h2><br />
 <br />
+<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 - f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
 <a class='textlink' href='./2025-03-05-sharing-on-social-media-with-gos.html'>2025-03-05 - Sharing on Social Media with Gos v1.0.0</a><br />
 <a class='textlink' href='./2025-02-08-random-weird-things-ii.html'>2025-02-08 - Random Weird Things - Part Ⅱ</a><br />
 <a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 - f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
